Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
Session
Poster Session 2
Time:
Tuesday, 08/July/2025:
4:00pm - 4:30pm

Location: Zone 2 - Cafetaria

KU Leuven Ghent Technology Campus, Gebroeders De Smetstraat 1, 9000 Gent

Presentations

Rebalancing CAPEX and OPEX to Mitigate Uncertainty and Enhance Energy Efficiency in Renewable Energy-Fed Chemical Processes

Ghida Mawassi, Alessandro Di Pretoro, Ludovic Montastruc

LGC (INP - ENSIACET), France

The conventional approach to process design has always been based on exploiting the degrees of freedom of a process system to optimize the operating conditions with respect to a selected objective function, usually defined as the best compromise between capital and operating expenses. However, although capital costs played the dominant role while the industrial sector was focused on expanding production capacity, the operating component is becoming more and more predominant in the current industrial landscape, owing to the increasing adoption of carbon-free energy sources and the tighter balance between supply and demand. In essence, the reliance on fluctuating and intermittently available energy resources (renewable resources) is increasing, making it essential to maximize product output while minimizing energy consumption.

Based on these observations, it appears evident that accepting higher investments in exchange for improved process performance could be a fruitful opportunity to further increase the efficiency of energy-intensive, renewables-fed chemical processes. To explore the potential of this design-paradigm reconsideration from a quantitative perspective, a dedicated biogas-to-methanol case study was set up for a comparative study. The reaction and separation sections for grade AA biomethanol production were designed and simulated based on the minimization of both the total annualized cost and the utility cost, and the resulting designs were compared. The optimal choice was to focus on the most energy-intensive section of the process, the purification. To this end, the distillation columns were intentionally oversized. Although this approach increased the initial investment cost, it led to significant energy savings.

The investment increase for each layout and the corresponding energy savings were assessed and analyzed. The simulation results show relevant improvements, with energy savings of 15% with respect to the conventional layout. Consequently, the possibility of establishing a new break-even operating point between equipment- and utility-related expenses as the optimal decision at the design stage is worth analyzing in greater detail in future studies. Notably, this break-even point depends strongly on both the cost and the availability of energy: in scenarios where energy availability is limited or costs are higher, the advantages of oversizing become more pronounced.
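
As a rough sketch of the break-even logic discussed here (our notation, not the authors'), oversizing pays off whenever the annualized extra investment is covered by the utility savings:

```latex
\mathrm{TAC} = \phi\,\mathrm{CAPEX} + \mathrm{OPEX},
\qquad
\phi\,\Delta\mathrm{CAPEX} \;\le\; \Delta\mathrm{OPEX}_{\mathrm{saved}}
```

where \phi is the annualization factor. Scarcer or more expensive energy enlarges \Delta OPEX_saved and shifts the break-even point toward larger equipment.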



Operational and Economic Feasibility of Green Solvent-Based Extractive Distillation for 1,3-Butadiene Recovery: A Comparison with Conventional Toxic Solvents

João Pedro Gomes1, Rodrigo Silva2, Clemente Nunes3, Domingos Barbosa1

1LEPABE / ALiCE, Faculdade de Engenharia da Universidade do Porto; 2Repsol Polímeros, S.A., Complexo Petroquímico; 3CERENA, Instituto Superior Técnico

The increasing demand for safer and environmentally friendly processes in the petrochemical industry requires replacing harmful solvents with safer alternatives. One such process, extractive distillation (ED) of 1,3-butadiene, typically employs potentially toxic solvents like N,N-dimethylformamide (DMF) and N-methyl-2-pyrrolidone (NMP). Although highly effective, these solvents may pose significant health and environmental risks. This study explores the viability of using propylene carbonate (PC), a green solvent, as a substitute in the butadiene ED process.

A comprehensive simulation study using Aspen Plus® was conducted to model the behavior of PC in comparison with DMF (Figure 1). Given the scarcity of experimental data for the PC/C4-hydrocarbon system, a reliable prediction of vapor-liquid equilibrium (VLE) was crucial to derive accurate pairwise interaction parameters (bij) and ensure a realistic representation of the molecular interactions. Initially, COSMO-RS (Conductor-like Screening Model for Real Solvents) was employed, leveraging its quantum chemical foundation to predict VLE from molecular surface polarization charge densities. Subsequently, new energy interaction parameters were obtained for the Non-Random Two-Liquid (NRTL) model coupled with the Redlich-Kwong (RK) equation of state, a combination that is particularly effective for systems with non-ideal behavior, such as those involving polar compounds, strong molecular interactions (like hydrogen bonding), and highly non-ideal mixtures, making it well suited to the mixtures present in extractive distillation processes. Key operational parameters, such as energy consumption, solvent recovery, and product purity, were evaluated to assess the process efficiency and feasibility. Additionally, an energy analysis of the process with the new solvent was conducted to evaluate its energy-saving potential, using the pinch methodology of the Aspen Energy Analysis tool to optimize the existing process for the new solvent. Economic evaluations, including capital (CapEx) and operational (OpEx) costs, were carried out to provide a holistic comparison between the solvents.
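
For reference, the regressed quantities enter the standard NRTL expressions for a binary pair, sketched below in the common Aspen-style parameterization (the exact form used by the authors may differ):

```latex
\tau_{ij} = a_{ij} + \frac{b_{ij}}{T}, \qquad
G_{ij} = \exp\!\left(-\alpha_{ij}\,\tau_{ij}\right), \qquad
\ln\gamma_1 = x_2^2\left[\tau_{21}\!\left(\frac{G_{21}}{x_1 + x_2 G_{21}}\right)^{\!2} + \frac{\tau_{12}\,G_{12}}{\left(x_2 + x_1 G_{12}\right)^{2}}\right]
```

The b_ij are then fitted by minimizing the deviation between the COSMO-RS-predicted VLE and the NRTL-RK model.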

The initial analysis of the solvent's selectivity showed slightly lower selectivity compared to the conventional, potentially toxic, solvents, along with a higher boiling point. As a consequence, a higher solvent-to-feed ratio may be required to achieve the desired separation efficiency. The higher boiling point will also require increased heat duties, leading to higher overall energy consumption. Nevertheless, the study underscores the potential of this green solvent to improve the sustainability of petrochemical processes while striving to maintain economic feasibility.



Optimizing Heat Recovery: Advanced Design of Integrated Heat Exchanger Networks with ORCs and Heat Pumps

Zinet Mekidiche Martínez, José Antonio Caballero Suárez, Juan Labarta

Universidad de Alicante, Spain

An advanced model has been developed to facilitate the simultaneous design of heat exchanger networks integrated with organic Rankine cycles (ORCs) and heat pumps, addressing two primary objectives. First, the model utilizes heat pumps to reduce reliance on external services by enhancing heat recovery within the system. Secondly, ORCs capitalize on residual heat streams to generate additional energy, effectively integrating with the existing heat exchanger network.

Effective integration of these components requires careful selection of the working fluids for the ORCs and heat pumps, as well as determination of the optimal operating temperatures of these cycles to achieve maximum efficiency. The heat exchanger network (in which inlet and outlet temperatures are not necessarily fixed), the number of organic Rankine cycles and heat pumps, and their operating conditions are all optimized simultaneously.

This method aims to minimize costs associated with external services, electricity, and equipment such as compressors and turbines. The approach leads to the design of a heat exchanger network that optimizes both the use of residual heat streams and the integration of other streams within the system. This not only enhances operational efficiency and sustainability but also demonstrates the potential of incorporating an Organic Rankine Cycle (ORC) with various energy streams, not limited solely to residual heat.



CO2 recycling plant for decarbonizing hard-to-abate industries: Empirical modelling and process design of a CCU plant - A case study

Jose Antonio Abarca, Stephanie Arias-Lugo, Lucia Gomez-Coma, Guillermo Diaz-Sainz, Angel Irabien

Departamento de Ingenierías Química y Biomolecular, Universidad de Cantabria

Achieving a net-zero CO2 society by 2050 is an ambitious target set by the European Commission's Green Deal. Reaching this goal will require implementing various strategies to reduce CO2 emissions. Conventional decarbonization approaches, such as using renewable energies, electrification, and improving energy efficiency, are well established. However, certain industries, known as "hard-to-abate" sectors, face unique challenges due to the inherent CO2 emissions of their processes. For these sectors, alternative strategies must be developed. Carbon Capture and Utilization (CCU) technologies offer a promising and sustainable solution by capturing CO2 and converting it into valuable chemicals, thereby contributing to the circular economy.

This study focuses on designing a CO2 recycling plant for the cement or textile industry as a case study. The proposed plant integrates a CO2 capture process using membrane technology and a utilization stage where CO2 is electrochemically converted into formic acid. In the capture stage, several experiments are carried out at varying inlet concentrations to optimize process parameters and maximize the CO2 output flow. The capture potential of a membrane is determined by its CO2 permeability and selectivity, making highly selective membranes essential for efficient CO2 separation from the flue gas stream. Key variables affecting the capture process include the flue gas concentration, inlet pressure, and total membrane area. Previous laboratory studies have demonstrated that a minimum CO2 concentration of 50 % and a flow rate of 15 mL min-1 cm-2 of electrode are required for efficient CO2 conversion to formic acid [1]. These variables are therefore crucial for an effective integration of the capture and utilization stages.
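
For orientation, the variables named above are linked in the textbook solution-diffusion description of membrane permeation (an illustrative form, not necessarily the authors' model):

```latex
J_{\mathrm{CO_2}} = \frac{\mathcal{P}_{\mathrm{CO_2}}}{\delta}\, A \left( p_{\mathrm{feed}}\, x_{\mathrm{CO_2}} - p_{\mathrm{perm}}\, y_{\mathrm{CO_2}} \right),
\qquad
\alpha_{\mathrm{CO_2/N_2}} = \frac{\mathcal{P}_{\mathrm{CO_2}}}{\mathcal{P}_{\mathrm{N_2}}}
```

where \mathcal{P} is the permeability, \delta the membrane thickness, A the membrane area, and \alpha the selectivity, so the flue gas composition, inlet pressure, and total area enter the capture performance directly.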

For the utilization stage, a three-compartment electrochemical cell is proposed for the direct production of formic acid via CO2 electroreduction. The primary operational variables influencing formic acid production include the CO2 inlet flow rate and composition (determined by the capture stage), applied current density, inlet stream humidity, and water flow rate in the central compartment [2].

The coupling of the capture and utilization stages is necessary for the development of CO2 recycling plants, but it remains at an early stage, especially for the integration of membrane capture technologies with CO2 electroreduction. This work aims to empirically model both the CO2 capture and electroreduction systems using neural networks, resulting in an integrated predictive model for the entire CO2 recycling plant. This model will optimize the performance of the capture-utilization system, facilitating the design of a sustainable process for CO2 capture and conversion into formic acid. Ultimately, this approach will contribute to reducing the product's carbon footprint.

Acknowledgments

The authors acknowledge the financial support received from the Spanish State Research Agency through the project PLEC2022-009398 MCIN/AEI/10.13039/501100011033 and Unión Europea Next Generation EU/PRTR. This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101118265. Jose Antonio Abarca acknowledges the predoctoral research grant (FPI) PRE2021-097200.

[1] G. Diaz-Sainz, J. A. Abarca, M. Alvarez-Guerra, A. Irabien, Journal of CO2 Utilization. 2024, 81, 102735

[2] J. A. Abarca, M. Coz-Cruz, G. Diaz-Sainz, A. Irabien, Computer Aided Chemical Engineering, 2024, 53, pp. 2827-2832



Integration of direct air capture with CO2 utilization technologies powered by renewable energy sources to deliver negative carbon emissions

Calin-Cristian Cormos1, Arthur-Maximilian Bathori1, Angela-Maria Kasza1,2, Maria Mihet2, Letitia Petrescu1, Ana-Maria Cormos1

1Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, Romania; 2National Institute for Research and Development of Isotopic and Molecular Technologies, Romania

Reduction of greenhouse gas emissions is an important environmental element in actively combating global warming and climate change. To achieve climate neutrality by the middle of this century, several options are envisaged, such as increasing the share of renewable energy sources (e.g., solar, wind, biofuels) to gradually replace fossil fuels, large-scale implementation of Carbon Capture, Utilization, and Storage (CCUS) technologies, and improving the overall energy efficiency of both production and utilization steps. With respect to reducing the CO2 concentration of the atmosphere, Direct Air Capture (DAC) options are of particular interest and very promising for delivering negative carbon emissions. Negative carbon emissions are a key element of climate neutrality, balancing the remaining positive-emission systems and the hard-to-decarbonize processes. The integration of renewable-powered DAC systems with CO2 utilization technologies can both deliver negative carbon emissions and reduce the energy and economic penalties of such promising decarbonization processes.

This work evaluates the innovative, energy-efficient potassium-calcium looping cycle as a promising direct air capture technology, integrated with various catalytic transformations of CO2 into basic chemicals (e.g., synthetic natural gas, methanol). The integrated system is powered by renewable energy for both its heat and electricity requirements. The investigated DAC concept is set to capture 1 Mt/y of CO2 at a carbon capture rate of about 75%. A fraction of this captured CO2 stream (about 5 - 10%) is catalytically converted into synthetic methane or methanol using green hydrogen produced by water electrolysis, with the rest sent to geological storage. Conceptual design, process modelling, and model validation, followed by overall energy optimization through thermal integration analysis, were the engineering tools used to establish the global mass and energy balances for quantifying key techno-economic and environmental performance indicators. The results show that the integrated DAC - CO2 utilization system, powered by renewable energy, delivers promising performance in terms of negative carbon emissions and reduced ancillary energy consumption. However, significant technological developments (e.g., scale-up, reduced solvent and sorbent make-up, better process intensification and integration, improved catalysts) are still needed to advance this innovative technology from the current state of the art to a relevant industrial size.
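
The two catalytic routes mentioned follow the standard overall stoichiometries, with the green hydrogen supplied by water electrolysis:

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \quad \text{(methanation, SNG)}
\qquad
\mathrm{CO_2} + 3\,\mathrm{H_2} \rightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \quad \text{(methanol synthesis)}
```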



Repurposing Existing Combined Cycle Power Plants with Methane Production for Renewable Energy Storage

Diego Santamaría, Antonio Sánchez, Mariano Martín

Department of Chemical Engineering, University of Salamanca, Plz Caidos 1-5, 37008, Salamanca, Spain

Nowadays, various technologies exist to generate renewable energy, such as solar, wind, and hydroelectric power. However, most of these energy sources fluctuate with the weather. Reliable energy storage is essential to enable a higher share of renewable energy in the current energy system, and it also strengthens energy security. Power-to-Gas technologies store renewable energy in the form of gaseous chemicals. In this case, Power-to-Methane is the technology of choice, since methane allows the use of existing infrastructure for its transport and storage.

This work proposes the integration and optimization of methane energy storage into existing combined cycle power plants. This involves introducing carbon capture systems and methane production while reusing the existing power production section. The process leverages renewable energy to produce hydrogen, which is then transformed into methane for easier storage. When energy demand arises, the stored methane is burned in the combined cycle power plant, producing two waste streams: water and CO2. The water is collected and returned to the electrolyzer, while the CO2 is captured and combined with hydrogen to synthesize methane again (Ghaib & Ben-Fares, 2018). This results in a circular process that repurposes the existing infrastructure.

Two combustion methods, ordinary combustion and oxy-combustion (Elias et al., 2018), are optimized to evaluate both alternatives and their economic feasibility. In ordinary combustion, air is used as the oxidizer, while in oxy-combustion pure oxygen is employed, including the oxygen produced in the electrolyzer. However, CO2 recirculation is necessary in oxy-combustion to prevent excessive flame temperatures (Stanger et al., 2015). In addition, the potential energy storage capacity of the existing combined cycle power plants of a country, specifically Spain, is also analysed. This would avoid their decommissioning and reuse the natural gas distribution network, adapting it for use in conjunction with a renewable energy storage system.

References

Elias, R. S., Wahab, M. I. M., & Fang, L. (2018). Retrofitting carbon capture and storage to natural gas-fired power plants: A real-options approach. Journal of Cleaner Production, 192, 722–734.

Ghaib, K., & Ben-Fares, F.-Z. (2018). Power-to-Methane: A state-of-the-art review. Renewable and Sustainable Energy Reviews, 81, 433–446.

Stanger, R., Wall, T., Spörl, R., Paneru, M., Grathwohl, S., Weidmann, M., Scheffknecht, G., McDonald, D., Myöhänen, K., Ritvanen, J., Rahiala, S., Hyppänen, T., Mletzko, J., Kather, A., & Santos, S. (2015). Oxyfuel combustion for CO2 capture in power plants. International Journal of Greenhouse Gas Control, 40, 55–125.



Powering chemical processes with variable renewable energy: A case of iron making in steel industry

Dorcas Tuitoek, Daniel Holmes, Binjian Nie, Aidong Yang

University of Oxford, United Kingdom

The steel industry is responsible for ~8% of global energy demand and emits 7% of global CO2 emissions annually [1]. Increased adoption of renewable energy in iron making, the primary step of the steel-making process, is one of the promising ways to decarbonise the industry. The intermittent nature of renewable energy, as well as the difficulty of storing it, results in a variable energy supply profile, necessitating a shift in the operating modes of manufacturing processes to make efficient use of renewable energy. Through dynamic simulation, this study explores the direct reduction process, in which iron ore is charged to a shaft furnace reactor and reduced to solid iron with green hydrogen.
Existing mathematical modelling and simulation studies of the shaft furnace have only investigated its behaviour assuming constant gas and solid feed rates. Here, we simulate iron ore reduction in a 1D model using COMSOL Multiphysics, with intermittent hydrogen supply, to predict the effects of a time-varying hydrogen feed on the degree of iron ore reduction. The dynamic model of the counter-current moving bed captures chemical reaction kinetics, mass transfer, and heat transfer. With settings relevant to industrial-scale operations, our results show that the system can tolerate drops in the hydrogen feed rate of up to ~10% without a reduction in the metallisation rate of the product. To tolerate greater fluctuations in the H2 feed rate, strategies were tested that alter the residence time and change the thermal profile in the reactor, ensuring complete metallic iron formation.
These findings show that a shaft furnace can be operated with a certain degree of hydrogen feed variability, providing an approach to mitigating the challenges of intermittent renewable energy supply and supporting the decarbonisation of industry.

1. International Energy Agency (IEA). Iron and Steel Technology Roadmap. Towards More Sustainable Steelmaking. https://www.iea.org/reports/iron-and-steel-technology-roadmap (2020).



Early-Stage Economic and Environmental Assessment for Emerging Chemical Technologies: Back-casting Approach

Yeonguk Kim, Dami Kim, Kosan Roh

Chungnam National University, Korea, Republic of (South Korea)

The emergence of alternative chemical technologies has made reliable economic and environmental assessments indispensable for guiding future research and development. However, these assessments are inherently challenging due to the lack of comprehensive understanding and technical knowledge of such technologies, particularly at low technology readiness levels (TRLs). This knowledge gap complicates accurate predictions of their real-world performance, economics, and potential environmental impacts. To address these challenges, we adopt a back-casting approach to demonstrate the TRL-based early-stage evaluation procedure previously proposed by Roh et al. (2020, Green Chem. 22, 3842). In this work, we apply the framework to methanol production based on natural gas reforming, a mature chemical technology, to explore the framework's suitability for evaluating emerging chemical technologies. The target technology is assumed to be at three distinct stages of maturity: theoretical, intermediate, and engineering. We compute economic and environmental indicators using the information available at each stage and then assess how closely the indicators calculated at the theoretical and intermediate stages match those at the engineering stage. The analysis shows that the performance indicators are lowest at the theoretical stage, because that stage relies solely on reaction stoichiometry. The intermediate stage, despite considering various factors, yields slightly higher performance indicators than the engineering stage due to the lack of process optimization. The outcomes of this study enable a proactive assessment of emerging chemical technologies, providing insights into their feasibility at various stages of development.
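
A minimal sketch of what a theoretical-stage (stoichiometry-only) indicator looks like, in the spirit of the procedure described above; the simplified net reaction and the prices are illustrative assumptions, not the authors' numbers:

```python
# Theoretical-stage indicator from stoichiometry alone: conversion losses,
# utilities, and capital are unknown at this TRL. Net reaction assumed for
# illustration: CH4 + H2O -> CH3OH + H2 (reforming + methanol synthesis).
MW = {"CH4": 16.04, "CH3OH": 32.04}        # molar masses, g/mol
price = {"CH4": 0.25, "CH3OH": 0.45}       # hypothetical prices, USD/kg

ch4_per_meoh = MW["CH4"] / MW["CH3OH"]     # 0.50 kg CH4 per kg MeOH

# Economic potential: product revenue minus stoichiometric feedstock cost
economic_potential = price["CH3OH"] - ch4_per_meoh * price["CH4"]
print(f"theoretical-stage economic potential: {economic_potential:.3f} USD/kg MeOH")
```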



A White-Box AI Framework for Interpretable Global Warming Potential Prediction

Jaewook Lee, Ethan Errington, Miao Guo

King's College London, United Kingdom

The transformation of the chemical industry towards sustainable manufacturing requires reliable yet robust decision-making tools involving Life Cycle Assessment (LCA). LCA offers a standardised method to evaluate the environmental profiles of chemical processes and products. However, with the emergence of numerous novel chemicals and processes, existing LCA inventory databases are increasingly resource-intensive to develop, often delayed in reporting, and suffer from data gaps. Research efforts have addressed these knowledge gaps by developing predictive models that estimate LCA properties from chemical structures. However, previously published research has been hampered by limited dataset availability and by reliance on complex black-box models such as deep neural networks (DNNs), which often provide low predictive accuracy and lack the interpretability needed for industrial adoption. Understanding the rationale behind model predictions is crucial, particularly in industrial applications where decision-making relies on both accuracy and transparency. In this study, we introduce a Kolmogorov–Arnold Network (KAN) model for LCA prediction of emerging chemicals, designed to bridge the gap between accuracy and interpretability by incorporating domain knowledge into the learning process.

We utilized 15 key LCA categories from the Ecoinvent v3.8 database, comprising 2,239 data points. To address the large variation in data scale, we applied a logarithmic transformation. Starting from chemical structures represented as SMILES, we converted them into MACCS keys (166-bit fingerprints) and Mordred descriptors (1,825 physicochemical properties), incorporating features such as molecular weight and hydrophobicity. These features were used to train KAN, Random Forest, and DNN models to predict LCA values across all categories. KAN consistently outperformed the Random Forest and DNN models in 12 out of 15 LCA categories, achieving an average R² value of 74% compared to 66% and 67% for Random Forest and DNNs, respectively. For critical categories like Global Warming Potential, Terrestrial Ecotoxicity, and Ozone Formation–Human Health, KAN achieved high predictive accuracies of 0.84, 0.86, and 0.87, respectively, demonstrating an 8% improvement in overall accuracy. Our feature analysis indicated that MACCS keys provided nearly the same predictive power as Mordred descriptors, despite containing significantly fewer features. Furthermore, we identified that retaining data points with extremely large LCA values (top 3%) could degrade model performance, underscoring the importance of careful data curation. In terms of model interpretability, the use of Gini importance and SHapley Additive exPlanations (SHAP) revealed that functional groups such as halogens, oxygen, and methyl groups had the most significant impact on LCA predictions, aligning with domain knowledge. The SHAP analysis further highlighted that KAN was able to capture more complex structure-property relationships than the conventional models.
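
A minimal sketch of the featurization step described above (SMILES to 166-bit MACCS keys plus a log transform), shown here with a Random Forest baseline; the molecules and target values are placeholders, not the Ecoinvent data:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]          # toy molecules
gwp = np.array([1.6, 3.2, 1.1])                  # hypothetical GWP, kg CO2-eq/kg

# MACCS fingerprints as model inputs (RDKit returns a 167-slot bit vector)
X = np.array([list(MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(s))) for s in smiles])
y = np.log10(gwp)                                # log transform tames scale variation

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(10 ** model.predict(X))                    # back-transform to original units
```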

In conclusion, the application of the KAN model for LCA predictions provides a robust and accurate framework for evaluating the environmental impacts of emerging chemicals. By integrating domain-specific knowledge, this approach not only enhances the reliability of LCA prediction but also offers deeper insights into the structural drivers of environmental outcomes. Its demonstrated success in identifying key molecular features makes it a valuable tool for accelerating sustainable innovations in both chemical process transformations and drug development, where precise environmental assessments are essential.



Data-driven approach for reaction mechanism identification using neural ODEs

Junu Kim1,2, Itushi Sakata3, Eitaro Yamatsuta4, Hirokazu Sugiyama1

1The University of Tokyo, Japan; 2Auxilart Co., Ltd., Tokyo, 104-0061, Japan; 3Institute of Physical and Chemical Research, Hyogo, 660-0813, Japan; 4Independent researcher, Japan

In the fields of reaction engineering and process systems engineering, mechanistic models have traditionally been the focus due to their explainability and extrapolative power, as they are based on fundamental principles governing the system. For chemical reactions, kinetic studies are crucial in developing these mechanistic models, providing insights into reaction mechanisms and estimating model parameters [1, 2]. However, kinetic studies often require extensive cycles of experimental data acquisition, reaction pathway generation, model construction, and parameter estimation, making the process laborious and time-consuming. In response to these challenges, machine learning techniques have gained attention. A recent approach involves using neural network models trained on simulation data to classify reaction mechanisms [3]. While effective, these methods demand vast amounts of training data, and expanding the reaction boundaries further increases the data requirements.

In this study, we present a direct, data-driven approach to identifying reaction mechanisms and constructing mechanistic models from experimental data without the need for large datasets. As an initial attempt, we focused on amination and Grignard reactions, which are widely used in chemical and pharmaceutical synthesis. Since chemical reactions can be expressed as differential equations, our hypothesis is that by calculating first- or higher-order derivatives directly from experimental data, we can estimate the relationships between the chemical compounds in the system and identify the reaction mechanism, order, and parameter values. The major challenge arises with real experimental data, where the number of data points is often limited (e.g., around ten), making it difficult to estimate differential values directly. To address this, we employed neural ordinary differential equations (neural ODEs) to effectively interpolate these sparse datasets [4]. By applying neural ODEs, we were able to generate interpolated data, which enabled the calculation of derivatives and the development of mechanistic models that accurately reproduce the observed data. For future work, we plan to validate our methodology across a broader range of reactions and further automate the process to enhance efficiency and applicability.
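
A minimal neural-ODE interpolation sketch in the spirit of the approach described above, using the torchdiffeq package of ref. [4]: fit a small network dy/dt = f(y) to sparse concentration data, then evaluate the fitted trajectory densely so derivatives can be estimated. The data here are synthetic first-order decay, not the amination or Grignard measurements:

```python
import torch
from torchdiffeq import odeint  # pip install torchdiffeq

t_obs = torch.tensor([0.0, 1.0, 2.0, 4.0, 8.0])        # sparse sampling times
y_obs = torch.exp(-0.5 * t_obs).unsqueeze(1)           # toy concentration data

# Small network representing the unknown right-hand side dy/dt = f(y)
f = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
rhs = lambda t, y: f(y)                                # autonomous dynamics

opt = torch.optim.Adam(f.parameters(), lr=0.01)
for _ in range(500):
    opt.zero_grad()
    y_pred = odeint(rhs, y_obs[0], t_obs)              # integrate from initial state
    loss = ((y_pred - y_obs) ** 2).mean()
    loss.backward()
    opt.step()

# Dense evaluation of the fitted trajectory; derivatives are now estimable
t_dense = torch.linspace(0.0, 8.0, 200)
y_dense = odeint(rhs, y_obs[0], t_dense)
```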

References

[1] P. Sagmeister et al., React. Chem. Eng. 2023, 8, 2818. [2] S. Diab et al., React. Chem. Eng. 2021, 6, 1819. [3] J. Bures and I. Larrosa, Nature 2023, 613, 689. [4] R. T. Q. Chen et al., NeurIPS 2018.



Generalised Disjunctive Programming for Process Synthesis

Lukas Scheffold, Erik Esche

Technische Universität Berlin, Germany

Automating process synthesis presents a formidable challenge in chemical engineering. Particularly challenging is the development of frameworks that are both general and accurate, while remaining computationally tractable. To achieve generality, a building block-based modelling approach was proposed in previous contributions by Kuhlmann and Skiborowski [1] and Krone et al. [2]. This model formulation incorporates Phenomena-based Building Blocks (PBBs), capable of depicting a wide array of separation processes [1], [3]. To maximize accuracy, the PBBs are interfaced with CAPE-OPEN thermodynamics, allowing for detailed thermodynamic models [2] within the process synthesis problem. However, the pursuit of generality and accuracy introduces increased model complexity and poses the risk of combinatorial explosion. To address this and enhance tractability, [1] developed a structural screening method that forbids superstructures leading to infeasible configurations. These combined innovations allow for general, accurate, and tractable superstructures.

To further increase the solvable problem size, we propose an advanced optimization framework leveraging generalized disjunctive programming (GDP). It allows multiple improvements over existing MINLP formulations, aiming at improved feasibility and reduced solution time, achieved by deactivating unused model equations during the solution procedure. Additionally, Grossmann [4] showed that a disjunctive branch-and-bound algorithm can be postulated, which provides tighter bounds for linear problems than those obtained through the reformulations used in conventional MINLP solvers, reducing the required solution time.

Building on these insights, it is of interest whether these findings extend to nonlinear systems. To investigate this, we developed a MathML/XML-based automatic code generation tool inside MOSAICmodeling [5], which formulates complex nonlinear GDP problems and exports them to conventional optimization environments (Pyomo, GAMS, etc.). These are then coupled with structural screening methods [1] and solved using out-of-the-box functionalities for GDP solution. To validate the proposed approach, a case study is conducted involving two PBBs, previously published by Krone et al. [2]. The study compares the performance of the GDP-based optimization framework against conventional MINLP approaches. Preliminary results suggest that the GDP-based framework offers computational advantages over conventional MINLP formulations. The full paper will present detailed comparisons, offering insights into the practical applicability and benefits of GDP.
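
To illustrate the modelling style (a toy problem, not the authors' superstructure), a small GDP in Pyomo.GDP: each disjunct activates its own constraints, and unused equations are switched off by the disjunction before a big-M reformulation:

```python
import pyomo.environ as pyo
from pyomo.gdp import Disjunct, Disjunction

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 10))
m.cost = pyo.Var(bounds=(0, 100))

# Two alternative "building blocks," each with its own cost correlation
m.unitA = Disjunct()
m.unitA.perf = pyo.Constraint(expr=m.cost >= 2 * m.x + 5)
m.unitB = Disjunct()
m.unitB.perf = pyo.Constraint(expr=m.cost >= 4 * m.x + 1)

m.choice = Disjunction(expr=[m.unitA, m.unitB])    # exactly one block is active
m.demand = pyo.Constraint(expr=m.x >= 3)
m.obj = pyo.Objective(expr=m.cost)

pyo.TransformationFactory("gdp.bigm").apply_to(m)  # reformulate to MI(N)LP
pyo.SolverFactory("glpk").solve(m)
print(pyo.value(m.x), pyo.value(m.cost))           # unit A wins: x=3, cost=11
```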

References

[1] H. Kuhlmann and M. Skiborowski, "Optimization-Based Approach To Process Synthesis for Process Intensification: General Approach and Application to Ethanol Dehydration," Industrial & Engineering Chemistry Research, vol. 56, no. 45, pp. 13461–13481, 2017.

[2] D. Krone, E. Esche, N. Asprion, M. Skiborowski and J.-U. Repke, "Enabling optimization of complex distillation configurations in GAMS with CAPE-OPEN thermodynamic models," Computers & Chemical Engineering, vol. 157, p. 107626, 2022.

[3] H. Kuhlmann, M. Möller and M. Skiborowski, "Analysis of TBA-Based ETBE Production by Means of an Optimization-Based Process-Synthesis Approach," Chemie Ingenieur Technik, vol. 91, no. 3, pp. 336–348, 2019.

[4] I. E. Grossmann, "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques," Optimization and Engineering, no. 3, pp. 227–252, 2002.

[5] E. Esche, C. Hoffmann, M. Illner, D. Müller, S. Fillinger, G. Tolksdorf, H. Bonart, G. Wozny and J. Repke, "MOSAIC – Enabling Large-Scale Equation-Based Flow Sheet Optimization," Chemie Ingenieur Technik, vol. 89, no. 5, pp. 620–635, 2017.



Optimal Design and Operation of Off-Grid Electrochemical Nitrogen Reduction to Ammonia

Michael Johannes Rix1, Judith M. Schwindling1, Karim Bidaoui1, Alexander Mitsos2,1,3

1RWTH Aachen University, Germany; 2JARA-ENERGY, 52056 Aachen, Germany; 3Energy Systems Engineering (ICE-1), Forschungszentrum Jülich, Germany

Electrochemical processes can aid in defossilizing the chemical industry. When operated off-grid with its own renewable electricity (RE) production, the electrochemical process and the RE plants must be optimized together. We optimize the design and operation of an electrochemical system for nitrogen reduction to ammonia coupled with wind and solar electricity generation to minimize ammonia production costs. Electrochemical nitrogen reduction allows ammonia production from RE, water, and air in one electrolyzer [1]. Comparable design and operation optimizations for coupling RE with electrochemical systems were already performed in the literature for different systems (e.g., for water electrolysis by [2] and others).

We optimize the design and operation of the electrolyzer and RE plant over the scope of one year. Investment costs for the electrolyzer and RE plants are annualized over their respective lifetimes. We calculate the electricity production from weather data at hourly resolution and from the design of the RE plant; from the design of the electrolyzer and the electricity production, we calculate the ammonia production. We investigate three operating strategies: (i) direct coupling of RE and electrolyzer, (ii) curtailment of electricity, and (iii) battery storage plus curtailment. In direct coupling, the electrolyzer's electricity consumption must follow the RE generation, so the electrolyzer is sized for the peak power of the RE plant. It can therefore only be operated at full load at peak electricity generation, which occurs at only one or a few times of the year. Curtailment and battery storage decouple electricity production and consumption, so the electrolyzer can be operated at full or higher load much more often.
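
A back-of-envelope sketch of why curtailment raises the load factor: size the electrolyzer below the RE peak and curtail the surplus. The hourly generation profile and the capacities are synthetic illustrations, not the optimization model itself:

```python
import numpy as np

rng = np.random.default_rng(0)
gen = rng.uniform(0.0, 100.0, 8760)   # hypothetical hourly RE output, MW (peak 100)

def load_factor(capacity_mw: float) -> float:
    consumed = np.minimum(gen, capacity_mw)   # electrolyzer follows RE up to its size
    return consumed.sum() / (capacity_mw * 8760)

print(load_factor(100.0))  # (i) direct coupling: sized at peak -> low load factor
print(load_factor(40.0))   # (ii) curtailment: smaller stack at full load far more often
```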

Operation with curtailment increases the load factor of the electrolyzer and reduces the production cost. The RE plant can be over-designed such that the electrolyzer can operate at full or higher load at off-peak RE generation. Achieving a high load factor and few on/off cycles of the electrolyzer is important since on/off cycles can lead to catalyst degradation due to reverse currents [3]. Implementation of battery storage can further increase the load factor of the electrolyzer. However, battery costs are too high, resulting in increased production costs.

We run the optimization for different locations with different RE potentials. At all locations, operation with curtailment is beneficial, and battery storage remains too expensive. The availability of wind and solar determines the optimal design of the electrolyzer and RE plant, the optimal operation, the production cost, and the load factor.

References
1. MacFarlane, D. R. et al. A Roadmap to the Ammonia Economy. Joule 4, 1186–1205 (2020).
2. Hofrichter, A. et al. Determination of the optimal power ratio between electrolysis and renewable energy to investigate the effects on the hydrogen production costs. International Journal of Hydrogen Energy 48, 1651–1663 (2023).
3. Kojima, H. et al. Influence of renewable energy power fluctuations on water electrolysis for green hydrogen production. International Journal of Hydrogen Energy 48, 4572–4593 (2023).



A Stochastic Techno-Economic Assessment of Emerging Artificial Photosynthetic Bio-Electrochemical Systems for CO₂ Conversion

Haris Saeed, Aidong Yang, Wei Huang

Oxford University, United Kingdom

Artificial Photosynthetic Bioelectrochemical Systems (AP-BES) are a promising technology for converting CO2 into valuable bioproducts, addressing both carbon mitigation and sustainable production challenges. By integrating biological and electrochemical processes to emulate natural photosynthesis, AP-BES offer potential for scalable, renewable biomanufacturing. However, their commercialization faces significant challenges related to process efficiency, system integration, and economic uncertainties. A thorough techno-economic assessment (TEA) is crucial for evaluating the viability and scalability of this technology.

This study employs a stochastic TEA to assess the bioelectrochemical conversion of CO2 to bioproducts, accounting for variability and uncertainty in key technical and economic parameters. Unlike traditional deterministic TEA, which relies on fixed-point estimates, the stochastic approach uses probability distributions to capture a broader range of potential outcomes. Critical factors such as energy consumption, CO2 conversion efficiency, and bioproduct market prices are modeled probabilistically, offering a more accurate reflection of real-world uncertainties.

The novelty of this research lies in its comprehensive application and advanced methodology. This study is one of the first to apply a full-system TEA to AP-BES, covering the entire process from carbon capture to product purification. Moreover, the stochastic approach, utilizing Monte Carlo simulations, enables a more robust analysis by incorporating uncertainties in both technical and economic factors. This combined methodology provides more realistic insights into the system's economic potential and commercial feasibility compared to conventional deterministic models.

Monte Carlo simulations are used to generate probability distributions for key economic metrics, including total annualized cost (TAC), internal rate of return (IRR), and levelized cost of product (LCP). By performing thousands of iterations, the model offers a comprehensive understanding of AP-BES's financial viability, delivering confidence intervals and risk assessments often missing from deterministic approaches. Key variables include electricity price fluctuations, a significant driver of operating costs, and changes in bioproduct market prices due to varying demand. The model also accounts for uncertainties in future technological improvements, such as enhanced CO2 conversion efficiencies and potential economies of scale that could lower both capital expenditure (CAPEX) and operational expenditure (OPEX) per kg of CO2 processed. Sensitivity analyses further identify the most influential factors impacting economic outcomes, guiding future research and development.

The results underscore the critical role of uncertainty in evaluating the economic viability of AP-BES. While the technology demonstrates significant potential for both economic and environmental benefits, substantial risks remain, particularly concerning electricity price volatility and unpredictable bioproduct markets. Compared to static point estimates in deterministic approaches, Monte Carlo simulations provide a more nuanced understanding of the financial risks and opportunities. This stochastic TEA offers valuable insights for optimizing processes, reducing costs, and guiding investment and research decisions in the development of artificial photosynthetic bioelectrochemical systems.
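
A minimal sketch of the Monte Carlo logic described above: sample the uncertain inputs from probability distributions and build the resulting distribution of a levelized cost of product (LCP). All distributions and numbers are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo iterations

elec_price = rng.normal(0.08, 0.02, n)        # USD/kWh, fluctuating electricity price
energy_use = rng.triangular(40, 50, 65, n)    # kWh per kg of product
capex_ann = rng.uniform(1.5, 3.0, n)          # annualized CAPEX, USD per kg per year

lcp = capex_ann + elec_price * energy_use     # USD per kg of product

print(f"median LCP = {np.median(lcp):.2f} USD/kg")
print(f"90% interval = [{np.percentile(lcp, 5):.2f}, {np.percentile(lcp, 95):.2f}] USD/kg")
```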



Empowering LLMs for Mathematical Reasoning and Optimization: A Multi-Agent Symbolic Regression System

Shaurya Vats, Sai Phani Chatti, Aravind Devanand, Sandeep Krishnan, Rohit Karanth Kota

Siemens Technology and Services Pvt. Ltd

Understanding data with complex patterns is a significant part of the journey toward accurate data prediction and interpretation. The relationships between input and output variables can unlock diverse advancement opportunities across various processes. However, most AI models attempting to uncover these patterns are opaque rather than explainable, offering little interpretation. This paper explores an approach to explainable AI by introducing a multi-agent system (MaSR) for extracting equations relating features directly from data.

We developed a novel approach to perform symbolic regression by discovering mathematical functions using a multi-agent system of LLMs. This system addresses the traditional challenges of genetic optimization, such as random seed generation, complexity, and the explainability of the final equation. The agent-based system divides the process into various steps, including initial function generation, loss and complexity calculation, mutation and crossbreeding of equations, and explanation of the final equation to improve the accuracy and decrease the workload.

We utilize the in-context learning capabilities of LLMs trained on vast amounts of data to generate accurate equations more quickly. Additionally, we incorporate methods like retrieval-augmented generation (RAG) with tabular data and web search to further enhance the process. The system creates an explainable model that clarifies each process step leading to the final equation for a given dataset. We also combine the method with existing technologies to develop innovative solutions, such as incorporating physical laws derived from data via multi-agent symbolic regression (MaSR) to reduce illogical predictions and improve extrapolation, and passing the generated equations to LLMs as context when explaining large numbers of simulation results.

Our solution is compared with symbolic regression methods such as GPlearn and PySR against various benchmarks. This study presents research on expanding the reasoning capacities of large language models alongside their mathematical understanding. The paper serves as a benchmark in understanding the capabilities of LLMs in mathematical reasoning and can be a starting point for solving numerous complex tasks using LLMs. The MaSR framework can be applied in various areas where the reasoning capabilities of LLMs are tested for complex and sequential tasks. MaSR can explain the predictions of black-box models, develop data-driven models, identify complex relationships within the data, assist in feature engineering and feature selection, and generate synthetic data equations to address data scarcity, which are explored as further directions for future research in this paper.
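
For context, a minimal usage example of one of the named baselines (PySR); the MaSR system itself is not shown, this only illustrates the symbolic regression task the agents address, on toy data:

```python
import numpy as np
from pysr import PySRRegressor

X = np.random.rand(200, 2)
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2   # hidden ground-truth relationship

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "exp"],
)
model.fit(X, y)           # evolves candidate equations by genetic search
print(model.get_best())   # best accuracy/complexity trade-off found
```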



Solid Oxide Cells and Hydrogen Storage to Prevent Grid Congestion

Dorsan Lepour, Arthur Waeber, Cédric Terrier, François Maréchal

École Polytechnique Fédérale de Lausanne, Switzerland

The electrification of the heating and mobility sectors, alongside increasing photovoltaic (PV) capacities, places significant pressure on electricity grids, particularly in urban neighborhoods and densely populated zones. High penetration of heat pumps and electric vehicles, as well as significant PV deployment, can indeed induce supply shortfalls or require curtailment, respectively. Grid reinforcement is a potential solution, but it is costly and involves substantial structural engineering work. Although some local energy storage systems have been extensively studied as an alternative (primarily batteries), the potential of integrating reversible solid oxide cells (rSOC) coupled with hydrogen storage in the context of urban energy systems planning remains underexplored. This study aims to address this gap by investigating the technical and economic feasibility of such systems at the building or district scale.

This work uses the framework of REHO (Renewable Energy Hub Optimizer), a decision-support tool for sustainable urban energy system planning. REHO takes into account the endogenous resources of a defined territory, diverse end-use demands (e.g., heating, mobility), and multiple energy carriers (electricity, heat, hydrogen). Multi-objective optimizations are conducted across economic, environmental, and energy efficiency criteria to determine under which circumstances the deployment of rSOC and hydrogen storage becomes relevant.

The study considers several typical districts with their import and export capacities and examines two key scenarios: (1) a closed-loop hydrogen system where hydrogen is produced and consumed locally, and (2) a scenario involving connection to a broader hydrogen network. Results indicate that in areas where grid capacity is strained, rSOCs coupled with a hydrogen tank offer a compelling storage solution. They enhance energy self-consumption by converting surplus electricity into hydrogen for later use, while the heat generated during cell operation can be used to meet building space heating and domestic hot water demands.

These findings suggest that hydrogen-based energy storage can be a viable alternative to traditional grid reinforcement, particularly for areas facing an increased penetration of renewables in a saturated grid. The study highlights that for such regions approaching grid congestion, integrating local hydrogen capacities can provide both flexibility and efficiency gains while reducing the need for expensive grid upgrades.



A Modern Portfolio Theory Approach for Chemical Production with Supply Chain Considerations for Efficient Investment Planning

Mohamad Almoussaoui, Dhabia Al-mohannadi

Texas A&M University at Qatar, Qatar

The integrated supply chains of large chemical commodities and fuels play a major role in energy security. These supply chains are exposed to global shocks such as the COVID-19 pandemic [1]. As such, major natural gas producers and exporters such as Qatar aim to balance their supply chain investment returns against export risks, as the hydrocarbon sector contributes more than one-third of the country's Gross Domestic Product (GDP) [2]. Hence, this work introduces a modern portfolio theory (MPT) model formulation based on chemical commodity and fuel supply chains. The model uses Markowitz's optimization model [3] to meet an exporting country's financial objective of maximizing the investment return while minimizing the associated risk. By defining a supply chain asset as a combination of an exporting country, a traded chemical commodity, and an importing country, the model calculates the return for every supply chain investment and the risk associated with it due to price fluctuations. Solving the optimization problem yields a set of Pareto-optimal supply chain portfolios and the efficient frontier. The model integrates both the chemical process production [4] and the shipping stages of a supply chain. This work's case study showcases the importance of considering the integrated supply chain when building the MPT model and its impact on the number and allocations of the resulting optimal portfolios. The developed model can guide investment planners to achieve their financial goals at minimum risk.
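
In Markowitz's mean-variance formulation [3], with w the vector of portfolio weights over supply chain assets, \mu the expected returns, and \Sigma the covariance of returns driven by price fluctuations, the problem sketched above reads:

```latex
\min_{w}\; w^{\top} \Sigma\, w
\quad \text{s.t.} \quad
\mu^{\top} w \ge r_{\min}, \qquad \mathbf{1}^{\top} w = 1, \qquad w \ge 0
```

Sweeping the target return r_min traces the efficient frontier of Pareto-optimal portfolios, each asset being an (exporting country, commodity, importing country) triple.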

References

[1] M. Shehabi, "Modeling long-term impacts of the COVID-19 pandemic and oil price declines in Gulf oil economies," Economic Modelling, vol. 112, 2022.

[2] "Qatar - Oil & Gas Field Machinery Equipment," 29 July 2024. [Online]. Available: https://www.trade.gov/country-commercial-guides/qatar-oil-gas-field-machinery-equipment. [Accessed 18 September 2024].

[3] H. Markowitz, "Portfolio Selection," The Journal of Finance, vol. 7, no. 1, pp. 77-91, 1952.

[4] S. Shehab, D. M. Al-Mohannadi and P. Linke, "Chemical production process portfolio optimization," Chemical Engineering Research and Design, vol. 167, pp. 207-217, 2021.



Co-gasification of crude glycerol and plastic waste using air/steam mixtures: a modelling approach

Bahizire Martin Mukeru, Bilal Patel

University of South Africa, South Africa

There has been unprecedented growth in plastic waste, and current management techniques such as landfilling and incineration are unsustainable, particularly due to their associated environmental impacts [1]. Gasification is considered one of the most sustainable ways not only to address these issues but also to produce energy from waste plastics [1]. However, plastic waste gasification is associated with issues such as tar and coke formation, which reduce syngas quality [1],[2]. Another waste available in huge quantities is crude glycerol, a low-value by-product of biodiesel production; the cost of its purification is exceedingly high, which limits its applications as a purified product [3]. Co-feeding plastic wastes with crude glycerol for syngas production can not only address issues related to plastic gasification but also allow the utilization of crude glycerol and enhance syngas quality [3]. This study evaluates the performance of a downdraft gasifier producing hydrogen and syngas from the co-gasification of crude glycerol and plastic wastes, by means of thermodynamic analysis and modelling in the Aspen Plus simulation software. Performance indicators such as cold gas efficiency (CGE), carbon conversion efficiency (CCE), and syngas yield (SY) were used to determine the technical feasibility of the co-gasification at different equivalence ratios (ER). Results demonstrated that an increase in ER increased CGE, CCE, and SY. For a blend ratio of 50%, a CCE of 100% was attained at an ER of 0.35, whereas a CGE of 88.29% was attained at an ER of 0.3. Increasing the plastic content to 75%, a maximum CCE and CGE of 94.16% and 81.86% were achieved at an ER of 0.4. The hydrogen composition reached its maximum of 36.70% and 39.19% at an ER of 0.1 as the plastic ratio increased from 50% to 75%, respectively. A 50% plastic blend ratio achieved a syngas ratio (H2/CO) of 1.99 at an ER of 0.2, whereas a 75% blend reached a ratio of 2.05 at an ER of 0.25. At these operating conditions, the syngas lower heating value (LHV), SY, CGE, and CCE were found to be 6.23 MJ/Nm3, 3.32 Nm3, 66.58%, 76.35% and 6.27 MJ/Nm3, 3.60 Nm3, 59.12%, 53.22%, respectively. From these results, it can be deduced that air co-gasification is a promising technology for the sustainable production of energy from waste glycerol and plastic waste.
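
These indicators are commonly defined as follows (textbook forms, which may differ slightly from the authors' exact definitions):

```latex
\mathrm{CGE} = \frac{\dot{V}_{\mathrm{syngas}}\,\mathrm{LHV}_{\mathrm{syngas}}}{\dot{m}_{\mathrm{feed}}\,\mathrm{LHV}_{\mathrm{feed}}},
\qquad
\mathrm{CCE} = \frac{\dot{m}_{\mathrm{C,syngas}}}{\dot{m}_{\mathrm{C,feed}}},
\qquad
\mathrm{ER} = \frac{(\mathrm{air/fuel})_{\mathrm{actual}}}{(\mathrm{air/fuel})_{\mathrm{stoichiometric}}}
```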

References

[1] Mishra, R., Shu, C.M., Gollakota, A.R.K. & Pan, S.Y., 'Unveiling the potential of pyrolysis-gasification for hydrogen-rich syngas production from biomass and plastic waste', Energ. Convers. Manag. 2024: 118997. doi: 10.1016/j.enconman.2024.118997

[2] Chunakiat, P., Panarmasar, N. & Kuchonthara, P., 'Hydrogen Production from Glycerol and Plastics by Sorption-Enhanced Steam Reforming', Ind. Eng. Chem. Res. 2023; 62(49): 21057-21066. doi: 10.1021/acs.iecr.3c02072



Comparative and Statistical Study on Aspen Plus Interfaces Used for Stochastic Optimization

Josue Julián Herrera Velázquez1,3, Erik Leonel Piñón Hernández1, Luis Antonio Vega Vega1, Dana Estefanía Carrillo Espinoza1, Julián Cabrera Ruiz1, J. Rafael Alcántara Avila2

1Universidad de Guanajuato, Mexico; 2Pontificia Universidad Católica del Perú, Peru; 3Instituto Tecnológico Superior de Guanajuato, Mexico

New research on complex intensified schemes has popularized the use of multiple commercial process simulators. Interfaces between the simulation software and external computer systems for process optimization make it possible to maintain the rigor of the models. This type of optimization is referred to in the literature as "black-box optimization", since successive evaluations exploit the information from the simulator without altering the model inside it. Writing and reading of results involves four components: 1) the process simulation software, 2) a middleware protocol, 3) a wrapper protocol, and 4) the platform (IDE) hosting the optimization algorithm (Muñóz-López et al., 2017). The middleware protocol automates the process simulator and transfers information in both directions, while the wrapper protocol interprets the transferred information and makes it useful for both parties, the simulator and the optimizer. Aspen Plus ® has become popular due to the rigor of its models and the reliability of its results, as well as the customization it offers for conventional and unconventional schemes. Few studies have examined the efficiency and effectiveness of the various computer systems in which the optimization algorithm or the reported packages are programmed. Santos-Bartolome and Van-Gerven (2022) compared different computer systems (Microsoft Excel VBA ®, MATLAB ®, Python ®, and Unity ®) connected to Aspen Hysys ®, evaluating communication accuracy, information exchange time, and deviation of the results, and concluded that the best option is VBA ®. Ponce-Rocha et al. (2023) studied the execution time of the Aspen Plus ® - MATLAB ® and Aspen Plus ® - Python ® connections in multi-objective optimization using the respective optimization packages, concluding that the Python ® connection is the fastest.
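
As a sketch of the middleware step described above, Aspen Plus ® can be driven from Python ® over COM via pywin32, so that an external optimizer writes inputs and reads results; the file and variable-tree paths below are hypothetical and depend on the flowsheet:

```python
import win32com.client as win32

aspen = win32.Dispatch("Apwn.Document")            # Aspen Plus COM automation server
aspen.InitFromArchive2(r"C:\models\column.bkp")    # hypothetical backup file

# Write a decision variable, run the simulation, read back a result
node = aspen.Tree.FindNode(r"\Data\Blocks\COL\Input\BASIS_RR")   # e.g., reflux ratio
node.Value = 2.5
aspen.Engine.Run2()                                # synchronous run
duty = aspen.Tree.FindNode(r"\Data\Blocks\COL\Output\REB_DUTY").Value
print(duty)
```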

This work proposes a comparative study of the Aspen Plus ® software and its interfaces with Microsoft Excel VBA ®, Python ®, and MATLAB ®. Five schemes (conventional and intensified columns) are analyzed. The Total Annual Cost is optimized by a modified Simulated Annealing Algorithm (m-SAA) (Cabrera-Ruiz et al., 2021), implemented identically on all platforms and using the respective random number functions to make the study as homogeneous as possible. Each optimization is repeated ten times, with hypothesis testing to eliminate anomalous cases. The aspects evaluated are the time per iteration, the standard deviation across tests, and the number of feasible solutions. The results indicate that the best option for the optimization is the VBA ® interface, although the Python ® interface is not far behind. Since there is little literature on optimization algorithm packages in VBA ®, connecting through Python ®, an open-source language, may be the most efficient and effective way to perform stochastic optimization with the Aspen Plus ® software.



3D simulation and design of MEA-based absorption system for biogas purification

Debayan Mazumdar, Wei Wu

National Cheng Kung University, Taiwan

The shape and geometry design of an MEA-based absorption system using ANSYS Fluent R22 is addressed. CFD studies are conducted to observe the effect of liquid distribution quality on counter-current two-phase absorption under different liquid distributor designs. Through simulation and analysis, the detailed exploration of the fluid dynamics offers critical insights and enables performance optimization. Unlike previous literature, which focused on unstructured packing, the present study is carried out on structured Mellapak 500X packing, demonstrating the overall efficiency of an MEA-based absorption system for different distributor patterns. A previously published model for calculating liquid distribution quality is used to develop a detailed understanding of the relationship between the initial layers of packing and the pressure difference.



Enhancing Chemical Process Design: Aligning DEXPI Process with BPMN 2.0 for Improved Efficiency in Data Exchange

Shady Khella1, Markus Schichtel2, Erik Esche1, Frauke Weichhardt2, Jens-Uwe Repke1

1Process Dynamics and Operations Group, Technische Universität Berlin, Berlin, Germany; 2Semtation GmbH, Potsdam, Germany

BPMN 2.0 is a widely adopted standard across various industries, primarily used for business process management outside of the engineering sphere [1]. Its long history and widespread use have contributed to a mature ecosystem, offering advanced software tools for editing and optimizing business workflows.

DEXPI Process, a newly developed standard for early-phase chemical process design, focuses on representing Block Flow Diagrams (BFDs) and Process Flow Diagrams (PFDs), both crucial in the conceptual design phase of chemical plants. It provides a standardized way to document design activity, offering engineers a clear rationale for design decisions [2], which is especially valuable during a plant’s operational phases. While DEXPI Process offers a robust data model, it currently lacks an established serialization format for efficient data exchange. As Cameron et al. note in [2], finding a suitable format for DEXPI Process remains a key research area, essential for enhancing its usability and adoption. So far, Cameron et al. have explored several serialization formats for exchanging DEXPI Process information, including AutomationML, an experimental XML, and UML [2].

This work aims to map the DEXPI Process data model to BPMN 2.0, providing a standardized serialization for the newly developed standard. Mapping DEXPI Process to BPMN 2.0 also unlocks access to BPMN’s extensive software toolset. We investigate and validate the effectiveness of this mapping and the enhancements it brings to the usability of DEXPI Process through a case study based on the Tennessee-Eastman process, described in [3]. We then compare our approach with those of Cameron et al. in [2].
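To make the idea concrete, the sketch below emits a fragment of BPMN 2.0 XML (the standard OMG namespace) for two process steps of the Tennessee-Eastman case. The mapping shown, process step to bpmn:task and stream order to bpmn:sequenceFlow, is a simplified hypothetical illustration, not the final mapping developed in this work.

```python
# Sketch: serializing two PFD steps as BPMN 2.0 XML with the standard
# OMG namespace. Element ids and names are illustrative placeholders.
import xml.etree.ElementTree as ET

BPMN = "http://www.omg.org/spec/BPMN/20100524/MODEL"
ET.register_namespace("bpmn", BPMN)

defs = ET.Element(f"{{{BPMN}}}definitions",
                  {"targetNamespace": "http://example.org/dexpi-process"})
proc = ET.SubElement(defs, f"{{{BPMN}}}process", {"id": "te_pfd"})
ET.SubElement(proc, f"{{{BPMN}}}task", {"id": "t1", "name": "Reactor"})
ET.SubElement(proc, f"{{{BPMN}}}task", {"id": "t2", "name": "Separator"})
ET.SubElement(proc, f"{{{BPMN}}}sequenceFlow",
              {"id": "f1", "sourceRef": "t1", "targetRef": "t2"})

print(ET.tostring(defs, encoding="unicode"))
```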

We conclude by presenting our findings and the key benefits of this mapping, such as improved interoperability and enhanced toolset support for chemical process engineers. Additionally, we discuss the challenges encountered during the implementation, including aligning the differences in data structures between the two models. Furthermore, we believe this mapping serves as a bridge between chemical process design engineers and business process management teams, unlocking opportunities for better collaboration and integration of technical and business workflows.

References:

[1] ISO. (2022). Information technology — Object Management Group Business Process Model and Notation. ISO/IEC 19510:2013. https://www.iso.org/standard/62652.html

[2] Cameron, D. B., Otten, W., Temmen, H., Hole, M., & Tolksdorf, G. (2024). DEXPI Process: Standardizing Interoperable Information for process design and analysis. Computers & Chemical Engineering, 182, 108564. https://doi.org/10.1016/j.compchemeng.2023.108564

[3] Downs, J. J., & Vogel, E. F. (1993). A plant-wide industrial process control problem. Computers & Chemical Engineering, 17(3), 245-255. https://doi.org/10.1016/0098-1354(93)80018-I



Linear and non-linear convolutional approaches and XAI for spectral data: classification of waste lubricant oils

Rúben Gariso, João Coutinho, Tiago Rato, Marco Seabra Reis

University of Coimbra, Portugal

Waste lubricant oil (WLO) is a hazardous residue that requires careful management. Among the available options, regeneration is the preferred approach for promoting a sustainable circular economy. However, regeneration is only viable if the WLO does not coagulate during the process, which can cause operational issues, possibly leading to premature shutdown for cleaning and maintenance. To mitigate this risk, a laboratory analysis using an alkaline treatment is currently employed to assess the WLO coagulation potential before it enters the regeneration process. Nevertheless, such a laboratory test is time-consuming, presents several safety risks, and its outcome is subjective, depending on visual interpretation by the analyst.

To expedite decision-making, process analytics technology (PAT) and machine learning were employed to develop a model to classify WLOs according to their coagulation potential. To this end, three approaches were followed, with a focus on convolutional methodologies spanning both linear and non-linear modeling structures. The first approach (benchmark) uses partial least squares for discriminant analysis (PLS-DA) (Wold, Sjöström and Eriksson, 2001) and interval partial least squares (iPLS) (Nørgaard et al., 2000) combined with standard spectral pre-processing techniques (27 model variants). The second approach applies the wavelet transform (Mallat, 1989) to decompose the spectra into multiple frequency components by convolution with linear filters, and PLS-DA for feature selection (10 model variants). Finally, the third approach consists of a convolutional neural network (CNN) (Yang et al., 2019) to estimate the optimal filter for feature extraction (1 model variant). These models were trained on real industrial data provided by Sogilub, the organization responsible for the management of WLO in Portugal.
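A minimal sketch of the second approach is given below: each spectrum is decomposed by a discrete wavelet transform and the coefficients are classified with PLS-DA (PLS regression on a 0/1 label). The wavelet family, decomposition level, and data shapes are assumptions for illustration only.

```python
# Sketch: wavelet-transform features + PLS-DA on placeholder spectra.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def wavelet_features(spectra, wavelet="db4", level=4):
    # Concatenate approximation and detail coefficients per spectrum.
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level))
                     for s in spectra])

X = wavelet_features(np.random.rand(40, 512))   # placeholder spectra
y = np.random.randint(0, 2, 40)                 # 1 = coagulates, 0 = does not

pls = PLSRegression(n_components=3).fit(X, y)
y_hat = (pls.predict(X).ravel() > 0.5).astype(int)  # PLS-DA decision rule
print("training accuracy:", (y_hat == y).mean())
```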

The results show that the three modeling approaches can attain high accuracy, averaging 91%. The development of the benchmark model requires an exhaustive search over multiple combinations of pre-processing filters, since the optimal scheme cannot be defined a priori. On the other hand, implicit spectral filtering using wavelet transform convolution significantly lowers the complexity of the model development task, reducing the computational burden while maintaining the interpretability of linear approaches. The CNN was also capable of circumventing the pre-processing burden by implicitly estimating convolutional filters in its inner layers. Additionally, the use of explainable artificial intelligence (XAI) techniques demonstrated that the relevant features of the CNN model are in good accordance with those of the linear models. In summary, with an adequate level of expertise and effort, different approaches can provide similar prediction performances. However, the development process can be made faster, simpler, and computationally less demanding through a proper convolutional methodology, namely the one based on the wavelet transform.

References:

Mallat, S.G. (1989) IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), pp. 674–693.

Nørgaard, L., Saudland, A., Wagner, J., Nielsen, J.P., Munck, L. and Engelsen, S.B. (2000) Applied Spectroscopy, 54(3), pp. 413–419.

Wold, S., Sjöström, M. and Eriksson, L. (2001) Chemometrics and Intelligent Laboratory Systems, 58(2), pp. 109–130.

Yang, J., Xu, J., Zhang, X., Wu, C., Lin, T. and Ying, Y. (2019) Analytica Chimica Acta, 1081, pp. 6–17.



Mathematical Modeling of Ammonia Nitrogen Dynamics in RAS Integrated with Bayesian Parameter Optimization

Lingwei Jiang1, Tao Chen1, Bing Guo2, Daoliang Li3

1School of Chemistry and Chemical Engineering, University of Surrey, United Kingdom; 2School of Sustainability, Civil and Environmental Engineering, University of Surrey, United Kingdom; 3National Innovation Center for Digital Fishery, China Agricultural University,China

The concentration of ammonia nitrogen is a critical parameter in aquaculture, as excessive levels can be toxic to aquatic animals, hampering their growth or even resulting in death. Therefore, monitoring the ammonia nitrogen concentration in aquaculture is important for productivity and animal welfare. However, commercially available ammonia nitrogen sensors are expensive, have short lifespans requiring frequent maintenance, and can give unreliable results during use. In contrast, sensors for other water quality parameters (e.g., temperature, dissolved oxygen, pH) are well developed and accurate, and they could provide useful information to help predict the ammonia nitrogen concentration through a mathematical model. In this study, we present a new mathematical model for predicting ammonia nitrogen, combining fish bioenergetics with a mass balance on ammonia nitrogen. We conduct a sensitivity analysis of the model parameters to identify the key ones and then use a Bayesian optimisation algorithm to calibrate these key parameters against data collected from a recirculating aquaculture system in our lab. We demonstrate that the model gives reasonable predictions of ammonia nitrogen on experimental data not used in model calibration.
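The calibration step could look like the following sketch, which fits two assumed rate constants of a toy ammonia nitrogen balance by Bayesian optimisation (here via scikit-optimize's gp_minimize); the model form, parameter names, and synthetic data are illustrative, not the model presented in this work.

```python
# Sketch: Bayesian calibration of two assumed rate constants against
# synthetic total ammonia nitrogen (TAN) data. Names are illustrative.
import numpy as np
from scipy.integrate import odeint
from skopt import gp_minimize

t_meas = np.linspace(0, 48, 25)                       # h
rng = np.random.default_rng(0)
tan_meas = 0.5 * (1 - np.exp(-0.2 * t_meas)) + rng.normal(0, 0.01, t_meas.size)

def tan_model(theta):
    k_exc, k_nit = theta                              # excretion, nitrification
    return odeint(lambda c, t: k_exc - k_nit * c, 0.0, t_meas).ravel()

def loss(theta):
    return float(np.mean((tan_model(theta) - tan_meas) ** 2))

res = gp_minimize(loss, [(0.01, 1.0), (0.01, 1.0)], n_calls=40, random_state=1)
print("calibrated (k_exc, k_nit):", res.x)
```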



Computer-Aided Design of a Local Biorefinery Scheme from Water Lily (Eichhornia crassipes) to Produce Power and Bioproducts

Maria de Lourdes Cinco-Izquierdo1, Araceli Guadalupe Romero-Izquierdo2, Ricardo Musule-Lagunes3, Marco Antonio Martínez-Cinco1

1Universidad Michoacana de San Nicolás de Hidalgo, Facultad de Ingeniería Química, México; 2Universidad Autónoma de Querétaro, Facultad de Ingeniería, Mexico; 3Universidad Veracruzana, Instituto de Investigaciones Forestales, México

Lake ecosystems provide valuable services, such as vegetation and fauna, fertile soils, nutrient and climatic regulation, carbon sequestration, and recreation and tourism activities. Nevertheless, some are currently affected by high resource extraction, climatic change, or alien plant invasion (API), which causes the loss of local species and the deterioration of ecosystem function. Regarding API, reports have identified 665 invasive exotic plants in México (IMTA, 2020), among which the water lily (Eichhornia crassipes) stands out due to its quick proliferation rate, covering most national aquatic bodies. Thus, several strategies for controlling and using E. crassipes have been proposed (Gutiérrez et al., 1994). Specifically, after extraction, the water hyacinth biomass has been used as raw material for the production of several bioproducts and bioenergy; however, most of these schemes have not covered the region's needs, and economic profitability has not been reached. In this work, we propose a local biorefinery scheme to produce power and bioproducts from water lilies, using Aspen Plus V.10.0, according to the needs of the Patzcuaro Lake community in Michoacán, Mexico. The scheme has been designed to process 197.6 kg/h of water lily, aligned with the extraction region schedule (COMPESCA, 2023), separating the biomass into two main fractions: root (RT, 47 wt% of the total plant) and stems-leaves (S-L, 53 wt% of the total plant). Power and steam are generated from the RT flow (combustion process), while the S-L fraction is separated into two equal parts (50 wt% each). The first part is the feedstock for an anaerobic digestion process operated at 35 °C to produce a fertilizer stream from the process sludge and biogas, which is converted to power using a turbine. The second part of the S-L enters a drying unit to reduce its moisture content; the dried biomass is then divided between two processing zones: 1) pyrolysis, to produce bio-oil, biochar, and high-temperature gases, and 2) gasification, to generate syngas, which is converted to power. According to the results, the total generated power is capable of covering all the electric requirements of the scheme, producing a surplus of 45% relative to total consumption; the system also covers all heating requirements. In addition, the fertilizer and biochar are helpful products for regional needs, improving the total annual cost (TAC) of the scheme.
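For reference, the stated feed split works out as in the short calculation below (values rounded).

```python
# Worked split of the 197.6 kg/h water-lily feed described above.
feed = 197.6                                  # kg/h of wet water lily
root = 0.47 * feed                            # to combustion: ~92.9 kg/h
stems_leaves = 0.53 * feed                    # ~104.7 kg/h
to_digestion = to_drying = stems_leaves / 2   # ~52.4 kg/h each
print(round(root, 1), round(stems_leaves, 1), round(to_digestion, 1))
```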

References

COMPESCA. (2023, November 01). Comisión de Pesca del Estado de Michoacán. Informe anual de avances del Programa: Mantenimiento y Rehabilitación de Embalses.

Gutiérrez López, F. Arreguín Cortés, R. Huerto Delgadillo, P. Saldaña Fabela (1994). Control de malezas acuáticas en México. Ingeniería Hidráulica En México, 9(3), 15–34.

IMTA. (2020, July 15). Instituto Mexicano de Tecnología del Agua. Plantas Invasoras.



System analysis and optimization of replacing surplus refinery fuel gas by coprocessing with HTL bio-crude off-gas in oil refineries.

Erik Lopez Basto1,2, Eliana Lozano Sanchez3, Samantha Eleanor Tanzer1, Andrea Ramírez Ramírez4

1Department of Engineering Systems and Services, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, the Netherlands; 2Cartagena Refinery, Ecopetrol S.A., Colombia; 3Department of Energy, Aalborg University, Aalborg, Denmark; 4Department of Chemical Engineering, Faculty of Applied Sciences, Delft University of Technology, Delft, the Netherlands

Sustainable production is a critical goal for the oil refining industry supporting the energy transition and reducing climate change impacts. This research uses Ecopetrol, Colombia's state-owned oil and gas company, and one of its high-complexity refineries (processing 11.45 Mtpd of crude oil) as a case study to explore CO2 reduction strategies. Decarbonizing refineries requires a combination of technologies, including low-carbon hydrogen (Low-C H2), sustainable energy, carbon capture, utilization, and storage (CCUS), bio-feedstocks, and product changes.

A key question addressed is the impact of CCUS on refinery performance and the potential to repurpose surplus refinery fuel gas while balancing techno-economic and environmental goals in the short and long term.

Colombia’s biomass resources offer opportunities for advanced biofuel technologies like Hydrothermal Liquefaction (HTL), which produces bio-crude compatible with existing refinery infrastructure and off-gas with biogenic carbon that can be used in CCU processes. This research is grounded on the opportunity to utilize refinery fuel gas and HTL bio-crude off-gas in conversion processes to produce more valuable and sustainable products (see Figure 1 for the simplified system block diagram).

Our systems optimization approach, using mixed-integer linear programming (MILP) in the Linny-R software, evaluates refinery operations and minimizes costs under CO2 emission constraints. Building on optimized low-C H2 and CCS systems (Lopez, E., et al. 2024), the first step assesses the surplus refinery fuel gas, and the second screens CCU technologies, selecting steam reforming and autothermal reforming to convert fuel gas into methanol. HTL bio-crude off-gas is integrated into thermocatalytic processes for further methanol production, with techno-economic data sourced from the literature and Aspen Plus simulations. The detailed techno-economic assessment presented by Lozano, E., et al. (2024) is used as input for this study.

The objective function in the system analysis focuses on cost minimization while achieving specified CO2 reduction targets.
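The structure of such a cost-minimization model can be illustrated with the toy MILP below (written with the PuLP library rather than Linny-R); all coefficients, bounds, and the CO2 cap are invented placeholders.

```python
# Toy sketch of the optimisation structure (PuLP stands in for Linny-R):
# split 25 t/h of surplus fuel gas between on-site heat and a methanol CCU
# route at minimum cost under a CO2 cap. All coefficients are placeholders.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

m = LpProblem("refinery_gas", LpMinimize)
to_heat = LpVariable("gas_to_heat", 0, 25)        # t/h
to_meoh = LpVariable("gas_to_methanol", 0, 25)    # t/h
build = LpVariable("build_ccu_unit", cat=LpBinary)

m += 30 * to_heat + 55 * to_meoh + 1000 * build   # operating + annualized cost
m += to_heat + to_meoh == 25                      # surplus gas balance
m += to_meoh <= 25 * build                        # CCU route needs the unit
m += 2.8 * to_heat + 0.4 * to_meoh <= 40          # CO2 emission cap, t/h
m.solve()
print(to_heat.value(), to_meoh.value(), build.value())
```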

Results show that CCU technologies and surplus gas utilization can significantly reduce CO2 emissions, offering valuable insights into how refineries can contribute to global decarbonization efforts. Integrating biomass-derived feedstocks and CCU technologies provides a viable path for sustainable refinery operations, advancing the industry's role in a more sustainable energy future.

Figure 1. Simplified system block diagram

References

Lopez, E., et al. (2024). Assessing the impacts of low-carbon intensity hydrogen integration in oil refineries. Manuscript in press.

Lozano, E., et al. (2024). TEA of co-processing refinery fuel gas and biogenic gas streams for methanol synthesis. Manuscript submitted for publication in Escape Conference 2025.



Technical Assessment of direct air capture using piperazine in an advanced solvent-based absorption process

Shengyuan Huang, Olajide Otitoju, Yao Zhang, Meihong Wang

University of Sheffield, United Kingdom

CO2 emissions from power generation and industry have raised the atmospheric CO2 concentration to 422 ppm, generating a series of climate change and environmental problems. Carbon capture is one of the effective ways to mitigate global warming. Direct air capture (DAC), one of the negative emission technologies, has great potential for commercial development: the IEA Net Zero Emissions Scenario calls for capturing 980 Mt of CO2 in 2050.

DAC can be achieved through absorption using solvents, adsorption using solid adsorbents, or a combination of both. This study is based on liquid-phase DAC (L-DAC) because it requires less land and lower specific energy consumption than the alternatives, making it more suitable for large-scale commercial deployment. In the literature, MEA is widely used for DAC. However, the use of MEA in a DAC process faces two big challenges: high energy consumption of 6 to 8.8 GJ/tCO2 and high cost of up to $340/tCO2. These are the barriers preventing DAC deployment.

This research aims to study DAC using piperazine (PZ) in different configurations and to evaluate the technical and economic performance at large scale through process simulation. PZ, as the new solvent, could improve the absorption capacity and performance. The simulation is implemented in Aspen Plus®. The DAC process using PZ will be validated against simulation data from the literature to ensure the model's accuracy. Different configurations (e.g. standard configuration vs. advanced flash stripper), solvent loadings and carbon capture levels will be studied to achieve better system performance and energy consumption. The research outcome from this study can be useful for process design by industrial practitioners and for policymakers.

Acknowledgement: The authors would like to thank the financial support of the EU RISE project OPTIMAL (Grant Agreement No: 101007963).



TEA of co-processing refinery fuel gas and biogenic gas streams from thermochemical conversion for methanol synthesis

Eliana Lozano Sanchez1, Erik Lopez Basto2, Andrea Ramirez Ramirez2

1Aalborg University, Denmark; 2Delft University of Technology, The Netherlands

Heat decarbonization is a key strategy for fossil refineries to lower their emissions in the short to medium term. Direct electrification and other low-carbon heat sources are expected to play a major role; however, the current availability of refinery fuel gas (RFG), a mixture of residual gases rich in hydrocarbons used for on-site heat generation, may limit decarbonization if alternative uses for surplus RFG are not explored. Thus, evaluating RFG utilization options is key for refineries, while the integration of renewable carbon sources remains crucial to decrease fossil crude dependence.

This study presents a techno-economic assessment of co-processing biogenic CO2 sources from biomass thermochemical conversion with RFG to produce methanol, a key chemical with high demand in industry and as a shipping fuel. Hydrothermal liquefaction (HTL) and fast pyrolysis (FP) are the technologies evaluated due to their integration potential in a refinery context: they produce bio-oils with drop-in fuel potential that can use existing infrastructure, and a by-product gas rich in CO2/CO that can be co-processed with the RFG into methanol. This combination remains unexplored in the literature and stands as the main contribution of this study.

The process is simulated in Aspen HYSYS assuming a fixed gas input of 25 tonne/h, which corresponds to estimated RFG surplus in a refinery case study after some emission reduction measures. The process comprises a reforming step to produce syngas (steam and autothermal reforming -SMR/ATR- are evaluated) followed by methanol synthesis via CO2/CO hydrogenation. The impact of gas co-processing is evaluated for increasing ratios of HTL/FP gas relative to the RFG baseline in terms of hydrogen requirement, carbon conversion to methanol, overall water balance and specific energy consumption.

Preliminary results indicate that the valorization of RFG using SMR allows for an increased share of biogenic gas of up to 45 wt% without a negative impact on the overall carbon conversion to methanol. SMR of the RFG results in a syngas with excess hydrogen, which makes it possible to accommodate additional biogenic CO2 and operate at lower stoichiometric numbers without a conversion penalty and without additional H2 input; this is a key advantage of the integration. Although overall carbon conversion is not affected, the methanol throughput is reduced by 24-27% relative to the RFG baseline, due to the higher concentration of CO2 in the mix, which lowers the carbon content and increases water production during methanol synthesis. The ATR case results in lower energy consumption but produces less hydrogen, limiting the biogenic gas share to only 7 wt% before additional H2 is required for methanol synthesis.
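The stoichiometric number referred to above is the usual syngas module for methanol synthesis, SN = (H2 - CO2)/(CO + CO2), with SN close to 2 typically targeted. The small sketch below shows how a biogenic CO2 co-feed consumes an H2 surplus; the mole flows are invented for illustration.

```python
# Sketch: syngas stoichiometric number SN = (H2 - CO2)/(CO + CO2).
def stoichiometric_number(h2, co, co2):
    return (h2 - co2) / (co + co2)

print(stoichiometric_number(h2=3.0, co=1.0, co2=0.2))  # H2-rich SMR syngas
print(stoichiometric_number(h2=3.0, co=1.0, co2=0.7))  # after CO2 co-feed
```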

This study aims to contribute to the discussions on integration of low carbon technologies into refinery operations, highlighting synergies between fossil and biobased feedstocks that expand the state-of-the art of co-processing of bio-feedstocks from thermochemical biomass conversion. Future results include the estimation of trade-offs between production costs and methanol carbon intensity, motivating the integration of these technologies in more comprehensive system analysis of fossil refineries and their net zero pathways.



Surrogate Model-Based Optimisation of Pressure-Swing Distillation Sequences with Variable Feed Composition

Laszlo Hegely, Peter Lang

Department of Building Services and Process Engineering, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Hungary

For separating azeotropic mixtures, special distillation methods must be used, such as pressure-swing (PSD), extractive or heteroazeotropic distillation. The advantage of PSD is that it does not require the addition of a new component. However, it can only be applied if the azeotrope is pressure-sensitive, and its energy demand can also be high. The configuration of the system depends not only on the type of the homoazeotrope but also on the feed composition (z). If z lies between the azeotropic compositions at the pressures of the two columns, the feed can be introduced into either the low-pressure (LP-HP sequence) or the high-pressure column (HP-LP sequence). Depending on z, one of the sequences will be optimal, whether with respect to energy demand or total annual cost (TAC).

Hegely et al. (2022) studied the separation of a maximum-boiling azeotropic mixture water-ethylenediamine by PSD where z (35 mol% water) was between the azeotropes at 0.1 and 2.02 bar. The TAC of both sequences was minimised without and with heat integration. The LP-HP sequence was more favourable at this composition. The optimisation was performed by two methods: a genetic algorithm (GA) and a surrogate model-based optimisation method (SMBO). By SMBO, algebraic surrogate models were fitted to simulation results by the ALAMO software (Cozad et al., 2014) and the resulting optimisation problem was solved. Different decomposition techniques were tested with the models fitted (1) to elements of TAC (heat duty of LPC, column diameters), (2) to capital and energy costs or (3) to TAC itself. The best results were achieved with the highest level of decomposition. Although TAC obtained by SMBO was lower than that of GA only once, the difference was always within 5 %.

In this work, our aim is to (1) improve the accuracy of surrogate models, thus, the performance of SMBO and (2) study the influence of z on the optimum of the two sequences, using the case study of Hegely et al. (2022). The first goal is achieved by fitting the models to the results of the single columns instead of the two-column system. Achieving the second goal requires repeated optimisation at different feed compositions, which would be very time-consuming with conventional optimisation methods. However, an advantage of SMBO is that z can be included as input variable of the models. This enables quickly finding the optimum for any feed composition.
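As a toy illustration of the surrogate idea with z included as a model input, the sketch below fits a generic polynomial surrogate to mock simulation samples and re-optimises a single design variable for any feed composition; the quadratic "simulator" stands in for rigorous column simulations, and the polynomial stands in for the ALAMO-fitted models.

```python
# Sketch: surrogate with feed composition z as an input, so the optimum can
# be found for any z without re-running the simulator. All data are mock.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
Z = rng.uniform(0.2, 0.5, 200)           # feed composition samples
X = rng.uniform(1.0, 5.0, 200)           # design variable, e.g. reflux ratio
tac = 10 + 8*(X - 2 - 3*Z)**2 + 5*Z + rng.normal(0, 0.1, 200)  # mock simulator

surr = make_pipeline(PolynomialFeatures(3), LinearRegression())
surr.fit(np.column_stack([Z, X]), tac)

def optimal_x(z):
    f = lambda x: surr.predict([[z, x]])[0]
    return minimize_scalar(f, bounds=(1.0, 5.0), method="bounded").x

print(optimal_x(0.35))  # optimum for a 35 mol% feed, no new simulations
```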

The novelty of our work consists in determining the optimal PSD system as a function of the feed composition by SMBO. Additionally, this is the first work that uses ALAMO to fit the models used in the optimisation to the individual columns.

References

Cozad A., Sahinidis N.V., Miller D.C., 2014. Learning surrogate models for simulation-based optimization. AIChE Journal, 60, 2211–2227.

Hegely L., Karaman Ö.F., Lang P., 2022, Optimisation of Pressure-Swing Distillation of a Maximum-Azeotropic Mixture with the Feed Composition between the Azeotropes. In: Klemeš J.J., Nižetić S., Varbanov P.S. (eds.) Proceedings of the 25th Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction. PRES22.0188.



Unveiling Probability Histograms from Random Signals using a Variable-Order Quadrature Method of Moments

Menwer Attarakih1, Mark W. Hlawitschka2, Linda Al-Hmoud1, Hans-Jörg Bart3

1The University of Jordan, Jordan, Hashemite Kingdom of; 2Johannes Kepler University; 3RPTU Kaiserslautern

Random signals play a crucial role in chemical and process engineering, where industrial plants collect and analyse big data for process understanding and decision-making. This requires unveiling the underlying probability histogram from process random signals with a finite number of bins. Unfortunately, finding the optimal number of bins is still based on empirical optimization and general rules of thumb (e.g. the Scott and Freedman-Diaconis rules). The disadvantages here are the large number of bins that may be encountered and the inconsistency of the histogram with the low-order moments of the true data.

In this contribution, we introduce an alternative and general method to unveil probability histograms based on the Quadrature Method Of Moments (QMOM). Being a data-compression method, it works with the calculated moments of an unknown probability density function. Because of the ill-conditioned inverse moment problem, there is no simple and general inversion algorithm to recover the unknown probability histogram, which is usually required in many design applications and in real-time online monitoring (Thibault et al., 2023). Our method uses a novel variable-integration-order QMOM which adapts automatically depending on the relevance of the information contained in the random data. The number of bins used to recover the underlying histogram increases as the information entropy does. In the hypothetical limit where the data has zero information entropy, the number of bins is reduced to one. In the QMOM realm, the number of bins is explored by an evolutionary algorithm that assigns the nodes in an optimal manner to sample the unknown function or process from which the data is generated. The algorithm terminates when no more important information is available for assignment to the newly created node, up to a user-predefined threshold. If the data come from a dynamic source with varying mean and variance, the boundaries of the bins move dynamically to reflect the nature of the data.
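For context, the classical fixed-order building block that such a method generalizes is moment inversion: recovering quadrature nodes and weights from 2n moments. A minimal sketch of the standard Wheeler algorithm is given below; it is a textbook QMOM ingredient, not the adaptive variable-order algorithm introduced in this work.

```python
# Sketch: recover an n-node quadrature from 2n moments (Wheeler algorithm).
import numpy as np

def wheeler(moments):
    n = len(moments) // 2
    sigma = np.zeros((n + 1, 2 * n))
    sigma[1, :] = moments                       # row 1 holds the raw moments
    a = np.zeros(n); b = np.zeros(n)
    a[0] = moments[1] / moments[0]
    for k in range(1, n):
        for l in range(k, 2 * n - k):
            sigma[k + 1, l] = (sigma[k, l + 1] - a[k - 1] * sigma[k, l]
                               - b[k - 1] * sigma[k - 1, l])
        a[k] = (sigma[k + 1, k + 1] / sigma[k + 1, k]
                - sigma[k, k] / sigma[k, k - 1])
        b[k] = sigma[k + 1, k] / sigma[k, k - 1]
    # Jacobi matrix: eigenvalues are nodes, first eigenvector rows give weights.
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, moments[0] * vecs[0, :] ** 2

# Moments of a standard normal (1, 0, 1, 0, 3, 0) -> nodes 0 and +/- sqrt(3).
nodes, weights = wheeler([1, 0, 1, 0, 3, 0])
print(nodes, weights)
```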

The method is applied to many case studies, including a moment-consistent histogram unveiled from the monthly mean maximum air-to-surface temperature in Amman city from 1901 to 2023, using only 13 bins with a bimodal histogram. In another case study, diastolic and systolic blood pressure measurements are found to follow a normal-distribution histogram, using a data series spanning a six-year period with 11 bins. In a unique dynamic case study, batch particle aggregation plus growth is simulated starting from 11 bins, with the simulation ending with 14 bins after 5 seconds of simulation time. The result is a histogram consistent with 28 low-order moments. In addition, the measured droplet distribution from a pilot-plant sparger of toluene in water is found to follow a normal-distribution histogram with 11 bins.

As a main conclusion, our method is a universal histogram reconstruction method which only needs a sufficient number of moments to work, and it has been validated intensively on real-life problems.

References

E. Thibault, Chioua, M., McKay, M., Korbel, M., Patience, G. S., Stuart, P. R. (2023), Can. J. Chem. Eng., 101, 6055-6078.



Sensitivity Analysis of Key Parameters in LES-DEM Simulations of Fluidized Bed Systems Using generalized polynomial chaos

Radouan Boukharfane, Nabil El Mocayd

UM6P, Morocco

In applications involving fine powders and small particles, the accuracy of numerical simulations—particularly those employing the Discrete Element Method (DEM) for predicting granular material behavior—can be significantly impacted by uncertainties in critical parameters. These uncertainties include coefficients of restitution for particle-particle and particle-wall collisions, viscous damping coefficients, and other related factors. In this study, we utilize stochastic expansions based on point-collocation non-intrusive polynomial chaos to conduct a sensitivity analysis of a fluidized bed system. We consider four key parameters as random variables, each assigned a specific probability distribution over a designated range. This uncertainty is propagated through high-fidelity Large Eddy Simulation (LES)-DEM simulations to statistically quantify its impact on the results. To effectively explore the four-dimensional parameter space, we analyze a comprehensive database comprising over 1200 simulations. Notably, our findings reveal that variations in the particle and wall Coulomb friction coefficients exert a more pronounced influence on streamwise particle velocity than do variations in the particle and wall normal restitution coefficients.
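The sketch below illustrates point-collocation polynomial chaos on a normalized Legendre basis for a four-parameter problem, with first-order Sobol indices read off the expansion coefficients; the cheap algebraic test function stands in for the LES-DEM outputs (e.g. streamwise particle velocity).

```python
# Sketch: point-collocation polynomial chaos + first-order Sobol indices
# for 4 uniform parameters, using a cheap stand-in model.
import itertools
import numpy as np
from numpy.polynomial.legendre import legval

d, order, n_samp = 4, 2, 300
alphas = [a for a in itertools.product(range(order + 1), repeat=d)
          if sum(a) <= order]                        # multi-indices, mean first

rng = np.random.default_rng(2)
xi = rng.uniform(-1, 1, (n_samp, d))                 # scaled parameters
y = 3*xi[:, 0] + 0.5*xi[:, 1]**2 + 0.1*xi[:, 2]      # stand-in model output

def basis(xi, alpha):                                # product of Legendre polys
    cols = [legval(xi[:, i], [0]*k + [1]) for i, k in enumerate(alpha)]
    return np.prod(cols, axis=0)

A = np.column_stack([basis(xi, a) for a in alphas])
c, *_ = np.linalg.lstsq(A, y, rcond=None)            # regression collocation

norm2 = np.array([np.prod([1/(2*k + 1) for k in a]) for a in alphas])
var_terms = c**2 * norm2
total_var = var_terms[1:].sum()                      # skip the mean term
for i in range(d):
    sel = [j for j, a in enumerate(alphas)
           if a[i] > 0 and all(a[m] == 0 for m in range(d) if m != i)]
    print(f"S{i+1} =", var_terms[sel].sum() / total_var)
```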



An Efficient Convex Training Algorithm for Artificial Neural Networks by Utilizing Piecewise Linear Approximations and Semi-Continuous Formulations

Ece Serenat Koksal1, Erdal Aydin1, Metin Turkay2,3

1Department of Chemical and Biological Engineering, Koç University, Turkiye; 2Department of Industrial Engineering, Koç University, Turkiye; 3SmartOpt, Turkiye

Artificial neural networks (ANNs) are mathematical models representing the relationships between inputs and outputs, inspired by the structure of neuron connections in the human brain. ANNs consist of input and output layers, along with user-defined hidden layers containing neurons, which are interconnected through activation functions such as rectified linear unit (ReLU), hyperbolic tangent and sigmoid. A feedforward neural network (FNN) is a type of ANN that propagates information in one direction, from input to output. ANNs are widely used as data-driven approaches, especially for complex systems like chemical engineering, where mechanistic modelling poses significant challenges. However, they often encounter issues such as overfitting, insufficient data, and suboptimal training.

To address suboptimal training, piecewise linear approximations of nonlinear activation functions, such as sigmoid and hyperbolic tangent, can be employed. This approach can transform the non-convex training problem into a convex one, enabling training via a special ordered set type II (SOS2) formulation (Koksal & Aydin, 2023; Sildir & Aydin, 2022). The resulting formulation is a mixed-integer linear programming (MILP) problem. However, as the number of neurons, the number of approximation pieces or the dataset size increases, the computational time rises, due to the exponential complexity growth associated with binary variables, hyperparameters and data points.
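The approximation underlying the formulation can be pictured with the short sketch below: tanh is replaced by a piecewise-linear interpolant on fixed breakpoints, whose convex-combination weights become the SOS2 (lambda) variables in the MILP. The breakpoint count and range are arbitrary choices here.

```python
# Sketch: piecewise-linear surrogate of tanh on uniform breakpoints.
import numpy as np

bp = np.linspace(-3, 3, 7)       # breakpoints (7 points -> 6 linear pieces)
fv = np.tanh(bp)                 # exact values kept at the breakpoints

x = np.linspace(-3, 3, 1001)
pwl = np.interp(x, bp, fv)       # the convex-combination (lambda) evaluation
print("max abs approximation error:", np.abs(pwl - np.tanh(x)).max())
```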

In this work, we propose a novel training algorithm for FNNs employing SOSX variables, as defined by Keha et al. (2004), instead of the conventional SOS2 formulation. By modifying the branching algorithm, we transform the MILP problem into subsets of linear programming (LP) problems. This transformation also brings parallelizable properties, which may further reduce the computational time for training the FNNs. Results demonstrate that this change in the branching strategy significantly reduces computational time, making the formulation more efficient for convexifying the FNN training process.

References

Keha, A. B., De Farias, I. R., & Nemhauser, G. L. (2004). Models for representing piecewise linear cost functions. Operations Research Letters, 32(1), 44–48. https://doi.org/10.1016/S0167-6377(03)00059-2

Koksal, E. S., & Aydin, E. (2023). Physics Informed Piecewise Linear Neural Networks for Process Optimization. Computers and Chemical Engineering, 174. https://doi.org/10.1016/j.compchemeng.2023.108244

Sildir, H., & Aydin, E. (2022). A Mixed-Integer linear programming based training and feature selection method for artificial neural networks using piece-wise linear approximations. Chemical Engineering Science, 249. https://doi.org/10.1016/j.ces.2021.117273



Economic evaluation of Solvay processes for sodium bicarbonate production with brine and carbon tax considerations

Dina Ewis, Zeyad Ghazi, Sabla Y. Alnouri, Muftah H. El-Naas

Gas Processing Center, College of Engineering, Qatar University, Doha, Qatar

Reject brine discharge and high CO2 emissions from desalination plants are major contributors to environmental pollution. Managing reject brine involves significant costs, mainly due to the energy-intensive processes required for brine dilution and disposal. In this context, the Solvay process represents a mitigation scheme that can effectively reduce reject brine salinity and sequester CO2 in a single reaction while simultaneously producing sodium bicarbonate, an economically viable product. Therefore, this study reports a systematic techno-economic assessment of conventional and modified Solvay processes, incorporating brine and carbon taxes. The model evaluates the significance of implementing brine and CO2 taxes on the economics of the conventional and Ca(OH)₂-modified Solvay processes, compared with industry expenditures on brine dilution and treatment before discharge to the sea. The results show that the conventional Solvay process becomes profitable after applying a brine tax of 1 dollar per cubic metre of brine and a CO2 tax of 42 dollars per tonne of CO2, both figures lower than the current costs associated with brine treatment and existing carbon taxes. Moreover, the profitability of the Ca(OH)₂-modified Solvay process increases even further with minimal brine and CO₂ taxes. The findings highlight the significance of adopting the modified Solvay process as an integrated solution for sustainable brine management and carbon capture.



THE GREEN HYDROGEN SUPPLY CHAIN IN THE BRAZILIAN STATE OF BAHIA: A DETERMINISTIC APPROACH

Leonardo Santana1, Gustavo Santos1, Pessoa Fernando1, Barbosa-Póvoa Ana Paula2

1SENAI CIMATEC university center, Brazil; 2Instituto Superior Técnico – Universidade de Lisboa, Portugal

Hydrogen is increasingly recognized as a pivotal element in decarbonizing the energy, transport, chemical-industry, and agriculture sectors. However, significant technological challenges related to production, transport, and storage hinder its broader integration into these industries. Overcoming these barriers requires the development of a sustainable hydrogen supply chain (HSC). This paper aims to design and plan an HSC by developing a Mixed-Integer Linear Programming (MILP) model for the Brazilian state of Bahia, the fifth largest state of Brazil (as big as France) and a region with significant potential for sustainable electricity and electrolytic hydrogen production. The case study utilizes the existing road infrastructure; transport of liquefied and compressed hydrogen by truck or train is considered. A monetization strategy is employed to consolidate both economic and environmental aspects into a single objective function, translating CO2 emissions into costs using carbon credit prices (sketched below). Facility locations are selected based on the preferred locations for hydrogen production in Bahia's government report, considering four dimensions: economic, social, environmental, and technical. Wind power, solar PV, and grid electricity are considered as energy sources for the hydrogen production facilities, and the model selects the optimal combination of energy sources for each plant. The outcomes include the selection of specific hydrogen production plants to meet the demand centers' requirements, alongside decisions regarding the preferred form of hydrogen storage (liquefied or compressed) and the optimal energy source (solar, wind, or grid) for each facility. This model provides a practical contribution to the implementation of a sustainable green hydrogen supply chain in Bahia, focusing on the industrial sector's needs. The study offers a replicable and accessible computational approach to solving complex supply chain problems, especially in regions with growing interest in green hydrogen production.
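The monetized objective can be written schematically as below, where the symbols (facility set F, time periods t, carbon-credit price p_CO2, emissions E_t) are illustrative rather than the paper's exact notation.

```latex
\min_{x}\;\; \sum_{i \in \mathcal{F}} \bigl(\mathrm{CAPEX}_i(x) + \mathrm{OPEX}_i(x)\bigr)
\;+\; p_{\mathrm{CO_2}} \sum_{t} E_t(x)
```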



A combined approach to optimization of soft sensor architecture and physical sensor configuration

Lukas Furtner1, Isabell Viedt1, Leon Urbas1,2

1Process Systems Engineering Group, TU Dresden, Germany; 2Chair of Process Control Systems, TU Dresden, Germany

In the chemical industry, soft sensors are deployed to reduce equipment cost or to allow continuous measurement of process variables. Soft sensors monitor parameters not via physical sensors but infer them from other process variables, often by means of parametric equations such as balances and thermodynamic or kinetic dependencies. Naturally, the precision of soft sensors is affected by the uncertainty of their input variables. This paper proposes a novel approach to automatically identify the most precise soft sensor based on a set of process system equations and the configuration of physical sensors in the chemical plant. Furthermore, the method assesses the benefit of deploying additional physical sensors to increase a soft sensor's precision. This enables engineers to derive adjustments to the existing sensor configuration of a chemical plant. Because it approximates the uncertainty of soft sensors for a critical process variable via Monte Carlo simulation, the proposed method is robust to dependent, non-Gaussian uncertainties. Additionally, the approach allows the incorporation of hybrid semi-parametric soft sensors [1], modelling poorly understood effects and dependencies within the process system with data-driven, non-parametric parts. Applied to the Tennessee Eastman process [2], the method identifies Pareto-optimal sensor configurations, considering sensor cost and monitoring precision for critical process variables. Finally, the method's deployment in real-world chemical plants is discussed.
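The Monte Carlo idea can be illustrated with the toy heat-balance soft sensor below, where two temperature probes share a common drift and the duty measurement carries a skewed error, exactly the dependent, non-Gaussian case where linear error propagation fails; all sensor models and numbers are invented.

```python
# Sketch: Monte Carlo propagation of correlated, skewed sensor errors
# through a heat-balance soft sensor for a stream's mass flow.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

drift = rng.normal(0, 0.3, n)                      # shared ambient drift
T_in = 60.0 + drift + rng.normal(0, 0.2, n)        # degC
T_out = 80.0 + drift + rng.normal(0, 0.2, n)       # degC
Q = 4200.0 * (1 + 0.02 * rng.lognormal(0, 1, n))   # duty in W, skewed error

cp = 4186.0                                        # J/(kg K)
m_dot = Q / (cp * (T_out - T_in))                  # soft-sensed flow, kg/s

print("mean:", m_dot.mean(),
      "95% interval:", np.percentile(m_dot, [2.5, 97.5]))
```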

Sources
[1] J. Sansana et al., “Recent trends on hybrid modeling for Industry 4.0,” Computers & Chemical Engineering, vol. 151, p. 107365, Aug. 2021
[2] J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers & Chemical Engineering, vol. 17, no. 3, pp. 245–255, Mar. 1993



Machine Learning Models for Predicting the Amount of Nutrients Required in a Microalgae Cultivation System

Geovani R. Freitas1,2,3,4, Sara M. Badenes3, Rui Oliveira4, Fernando G. Martins1,2

1Laboratory for Process Engineering, Environment, Biotechnology and Energy (LEPABE); 2Associate Laboratory in Chemical Engineering (ALiCE); 3Algae for Future (A4F); 4LAQV-REQUIMTE

Effective prediction of nutrient demands is crucial for optimising microalgae growth, maximising productivity, and minimising resource waste. With the increasing amount of data related to microalgae cultivation systems, data mining (DM) and machine learning (ML) methods for extracting additional knowledge have gained popularity over time. In the DM process, models can be evaluated using ML algorithms such as random forest (RF), artificial neural networks (ANN) and support vector regression (SVR). In the development of these ML models, a data preprocessing stage is necessary due to poor data quality. While cleaning and outlier removal techniques are employed to eliminate missing data and outliers, normalization is used to standardize features, ensuring that no single feature dominates the model due to differences in scale. After this stage, feature selection is employed to identify the most relevant parameters, such as solar irradiance and initial dry weight of biomass. Once the optimal features are identified, data splitting and cross-validation strategies are employed to ensure that the models are trained and evaluated with representative subsets of the dataset. Proper data splitting into training and testing sets prevents overfitting, allowing the models to generalize effectively to new, unseen data. Cross-validation techniques, such as k-fold and repeated k-fold cross-validation, are used to rigorously test model performance across multiple iterations, ensuring that results are not dependent on any single data partition. Principal component analysis (PCA) can also be applied as a dimensionality reduction technique to simplify complex environmental datasets by reducing the number of variables or features in the data while retaining as much information as possible. To further improve prediction capabilities, ensemble methods are incorporated, leveraging multiple models to achieve higher overall performance. Stacking, a popular ensemble technique, is used to combine the outputs of individual models, such as RF, ANN, and SVR, into a single meta-model. This approach takes advantage of the strengths of each base model, such as the non-linear mapping capabilities of ANN, the robustness of RF against overfitting, and the effectiveness of SVR in handling complex feature interactions. By combining these diverse models, the stacked ensemble method provides more accurate and reliable predictions of nutrient requirements. The application of these ML techniques has been demonstrated using a dataset acquired from the cultivation of the microalga Dunaliella in a flat-panel photobioreactor (FP-PBR). The results showed that the data mining workflow, in combination with different ML models, was able to describe the nutrient requirements for a good performance of Dunaliella production in the carotenogenic phase, for β-carotene production, in an FP-PBR system.
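A compact sketch of the stacking step is shown below using scikit-learn's StackingRegressor with RF, ANN and SVR base learners under a linear meta-model; the features, target and hyperparameters are placeholders rather than the actual cultivation dataset.

```python
# Sketch: stacked ensemble of RF, ANN and SVR with a ridge meta-model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

X = np.random.rand(120, 5)   # e.g. irradiance, initial dry weight, T, pH, ...
y = np.random.rand(120)      # nutrient demand (placeholder target)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200)),
                ("ann", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)),
                ("svr", SVR(C=10.0))],
    final_estimator=RidgeCV(), cv=5)

print(cross_val_score(stack, X, y, cv=5, scoring="r2").mean())
```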



Dynamical modeling of ultrafine particle classification in tubular bowl centrifuges

Sandesh Athni Hiremath1, Marco Gleiss2, Naim Bajcinca1

1RPTU Kaiserslautern, Germany; 2KIT Karlsruhe, Germany

Ultrafine or colloidal particles are widely used in industry as aerogels, coatings, filtration aids or thin films and require a defined particle size. For this purpose, tubular centrifuges are suitable for particle separation and classification due to their high g-forces. The design and optimization of tubular centrifuges require a large number of pilot tests, which is time-consuming and costly. Additionally, although the centrifuge operates semi-continuously under constant process conditions, temporal changes in the particle size distribution and solids volume fraction arise, especially at the outlet. Altogether, these aspects make the task of designing an efficient centrifuge challenging. This work presents a dynamic model for the real-time simulation of the behavior during particle classification in a pilot-scale tubular centrifuge and also provides a novel data-driven algorithm for model validation. The combination of the two greatly facilitates the design and control of the centrifugation process, in particular for the tubular centrifuge being considered. First, we discuss the new continuous mathematical model as an improvement over the previously published multi-compartment (discrete) model by Winkler et al. [1]. Based on simulations, we show the influence of operating conditions and material behavior on the classification of a colloidal silica-water slurry. Subsequently, we validate the dynamical model by comparing experimental data with simulations of the temporal change of product loss, grade efficiency and sediment build-up. For validation, we propose a new data-driven method using neural ODEs that incorporates the proposed centrifugation model, thus encoding the physical (transport) laws in the network parameters (a minimal sketch follows the list below). In summary, our work provides the following novelties:

1. A continuous dynamical model for a tubular centrifugation process that establishes a strong foundation for continuous and semi-continuous control of the process.

2. A new data-driven validation algorithm that allows the use of the physics-based continuous model, thereby serving as a base methodology for developing a full-fledged learning-based observer model which can be used as a state estimator during continuous process control.
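The sketch announced above shows the core construction: a neural ODE whose right-hand side combines a known transport-like term with a learned correction, integrated with the torchdiffeq package; the one-state model and its rate constant are illustrative, not the centrifuge model of this work.

```python
# Sketch: a hybrid neural ODE. The right-hand side adds a learned correction
# to an assumed known first-order term; torchdiffeq integrates it in a
# differentiable way so a data-misfit loss can train the correction network.
import torch
from torchdiffeq import odeint

class HybridRHS(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.correction = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

    def forward(self, t, c):
        physics = -0.5 * c                  # assumed known settling term
        return physics + self.correction(c)

rhs = HybridRHS()
t = torch.linspace(0.0, 5.0, 50)
c0 = torch.tensor([[1.0]])
traj = odeint(rhs, c0, t)                   # shape: (time, batch, state)
loss = traj.pow(2).mean()                   # placeholder for a data misfit
loss.backward()                             # gradients reach the network
```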

[1] Marvin Winkler, Frank Rhein, Hermann Nirschl, and Marco Gleiss. Real-time modeling of volume and form dependent nanoparticle fractionation in tubular centrifuges. Nanomaterials, 12(18):3161, 2022.



Towards a multi-scale process optimization coupling custom models for unit operations, process simulator, and environmental impact.

Thomas Hietala1, Sonja Herres-Pawlis2, Pedro S.F. Mendes1

1Centro de Química Estrutural, Instituto Superior Técnico, Portugal; 2Institute of Inorganic Chemistry, RWTH Aachen University, Germany

To achieve utmost process efficiency, all scales matter, from phenomena within a given unit operation to mass and energy integration. For instance, the way mass transfer and kinetics are optimized in a chemical reactor (e.g., focusing either on activity or selectivity) impacts the downstream separation train and, thus, the process as a whole. Currently, as the design of sustainable processes is mostly performed independently at each scale, the overall influence of design choices at different levels is not assessed in a seamless way, leading to a trial-and-error, inefficient design workflow. In order to consider all scales simultaneously, a multi-scale model has been developed that couples a process model to a complex mass-transfer-limited reactor model and to an environmental and/or social impact assessment tool. The production of polylactic acid (PLA), the most produced bioplastic to date [1], was chosen as the case study for the development of this tool.

The multi-scale model covers, as of today, the reactor, process and environmental analysis scales. The process model simulating the PLA production process was developed in the Aspen Plus simulation software, employing the Polymer Plus module and PolyNRTL as the thermodynamic method, based on a literature implementation [2]. The production process consists firstly of an oligomerization step converting lactic acid to a PLA pre-polymer, followed by a depolymerization step which converts the pre-polymer into lactide. After a purification step, the lactide forms high-molecular-weight PLA in a ring-opening polymerization step. The PLA product is obtained after a final purification step. In industry, the depolymerization step is performed in devolatilization equipment, which is a mass-transfer-limited reactor. As there are no adequate mass-transfer-limited reactor models in Aspen Plus, a Python CAPE-Open Unit Operation module [3] was developed to couple a realistic devolatilization reactor model into the process model. If mass transfer were not accounted for in the reactor, the ultimate PLA production would be underestimated eight-fold, with a corresponding impact on the estimated profitability and environmental performance.

From the process model, the economic performance of the process can be determined. To determine the environmental performance of the designed process simultaneously and seamlessly, a Life Cycle Analysis (LCA) model, performed in the OpenLCA software, is coupled with Aspen Plus using an interface coded in Python. With this multi-scale model in place, the impact of the design variables at the various scales on the process's overall economic and environmental performance can be determined and optimized.
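The coupling layer can be pictured with the generic sketch below, which scores simulated inventory flows against characterization factors exported from the LCA model; the factor values and flow names are invented placeholders, not outputs of the actual interface.

```python
# Sketch: scoring process-simulation inventory flows against LCA
# characterization factors. All names and values are placeholders.
GWP_FACTORS = {"electricity_kWh": 0.4, "steam_kg": 0.2, "lactic_acid_kg": 1.9}

def gwp_per_kg_product(inventory, product_kg):
    """kg CO2-eq per kg PLA from simulated inventory flows."""
    total = sum(GWP_FACTORS[f] * amount for f, amount in inventory.items())
    return total / product_kg

flows = {"electricity_kWh": 320.0, "steam_kg": 900.0, "lactic_acid_kg": 1400.0}
print(gwp_per_kg_product(flows, product_kg=1000.0))
```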

This multi-scale model creates a basis to develop a multi-objective optimization framework using economic and environmental objective functions directly from Aspen Plus and OpenLCA software. This could enable a reduction in the environmental impact of processes without disregarding the profitability of the process.

[1] - European Bioplastics, Bioplastics market data, 2023, https://www.european-bioplastics.org/news/publications/ accessed on 25/09/2024

[2] - K. C. Seavey and Y. A. Liu, Step-growth polymerization process modeling and product design. New Jersey: Wiley, 2008

[3] - https://www.colan.org/process-modeling-component/python-cape-open-unit-operation/ accessed on 25/09/2025



Enhancing hydrodynamics simulations in Distillation Columns Using Smoothed Particle Hydrodynamics (SPH)

RODOLFO MURRIETA-DUEÑAS1, JAZMIN CORTEZ-GONZÁLEZ1, ROBERTO GUTIÉRREZ-GUERRA2, JUAN GABRIEL SEGOVIA-HERNÁNDEZ3, CARLOS ENRIQUE ALVARADO-RODRÍGUEZ3

1TECNOLÓGICO NACIONAL DE MÉXICO/ CAMPUS IRAPUATO, MÉXICO; 2UNIVERSIDAD DE GUANAJUATO, MÉXICO; 3UNIVERSIDAD TECNOLÓGICA DE LEÓN, CAMPUS LEÓN, MÉXICO

Distillation is one of the most widely applied unit operations in chemical engineering, renowned for its effectiveness in product purification. However, traditional distillation processes are often hampered by significant inefficiencies, driving efforts to enhance thermodynamic performance in both equipment design and operation. While many alternatives have been evaluated using MESH equations and sequential simulators, comparatively less attention has been given to Computational Fluid Dynamics (CFD) modeling, largely due to its complexity. CFD methodologies typically fall under either Eulerian or Lagrangian approaches. The Eulerian method relies on a mesh to discretize the medium, providing spatial averages at the fluid interfaces. Popular techniques include the finite volume and finite element methods, with the finite volume method commonly employed to simulate the hydrodynamics, mass transfer, and momentum in distillation columns (Haghshenas et al., 2007; Lavasani et al., 2018; Zhao, 2019; Ke, 2022). Despite its widespread use, the Eulerian approach faces challenges such as interface modeling, convergence issues, and selecting appropriate turbulence models for simulating turbulent flows. In contrast, Lagrangian methods, which discretize the continuous medium using non-mesh-based points, offer detailed insights into interfacial phenomena. Among these, Smoothed Particle Hydrodynamics (SPH) stands out for its ability to model discontinuous media and complex geometries without requiring a mesh, making it ideal for studying various systems, including microbial growth (Martínez-Herrera et al., 2022), sea wave dynamics (Altomare et al., 2023), and stellar phenomena (Reinoso et al., 2023). This versatility and robustness make SPH a promising tool for distillation process modeling.

In this study, we present a numerical simulation of a liquid-vapor (L-V) thermal equilibrium stage in a plate distillation column, employing the SPH method. The focus is on sieve and bubble-cap plates, with periodic temperature conditions applied to facilitate thermal equilibrium. Column sizing was performed using Aspen One for an equimolar benzene-toluene mixture, operating under conditions ensuring a condenser cooling water temperature of 120 °F. The Chao-Seader thermodynamic model was applied, with both sieve and bubble-cap plates integrated into a ten-stage column. Stage 5 was designated as the feed stage, and a 98% purification and recovery rate for both components was assumed. This setup provided critical operational parameters, including liquid and vapor velocities, viscosity, density, pressure, and column diameter. Three-dimensional CAD models of the distillation column and the plates were generated in SolidWorks and subsequently imported into DualSPHysics (Domínguez et al., 2022) for CFD simulation. Stages 6 and 7 were selected for detailed analysis, as they are positioned just below the feed stage.

The results showed that the sieve plate achieved thermal equilibrium more rapidly than the bubble-cap plate, a difference attributable to the steam injection zone in the bubble-cap design. Moreover, the simulations allowed the calculation of heat transfer coefficients based on plate geometry, providing insights into heat exchange at the fluid interfaces. In conclusion, this study highlights the potential of using periodic temperature conditions to simulate thermal equilibrium in distillation columns. Additionally, the SPH method has demonstrated its utility as a powerful and flexible tool for simulating fluid dynamics and thermal equilibrium in distillation processes.
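For readers unfamiliar with SPH, the sketch below evaluates the standard cubic-spline (Monaghan) smoothing kernel in its common 3D normalization, the kind of kernel DualSPHysics-style codes use to interpolate field quantities between particles; it is generic background, not code from this work.

```python
# Sketch: the standard cubic-spline (M4) SPH kernel, 3D normalization.
import numpy as np

def cubic_spline_w(r, h):
    """Monaghan cubic-spline kernel W(r, h) with support radius 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)                     # 3D normalization
    w = np.where(q <= 1.0, 1 - 1.5*q**2 + 0.75*q**3,
         np.where(q <= 2.0, 0.25*(2 - q)**3, 0.0))
    return sigma * w

print(cubic_spline_w([0.0, 0.5, 1.5, 2.5], h=1.0))
```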



Electric arc furnace dust waste management: A process synthesis approach.

Agustín Porley Santana1, Mayra Doldan1,2, Martín Duarte Guigou2,3, Mauricio Ohanian1, Soledad Gutiérrez Parodi1

1Instituto de Ingeniería Química, Facultad de Ingeniería, Universidad de la República, Montevideo, 11300, Uruguay; 2Viento Sur Ingeniería, Ruta 61, Km 19, Nueva Helvecia, Colonia, Uruguay.; 3Grupo de Ingeniería de Materiales, Inst. Tecn. Reg. Sur-Oeste, Universidad Tecnológica del Uruguay, Horacio Meriggi 905, CP60000, Paysandú, Uruguay.

The residue from the solid collection system of steel mills is known as electric arc furnace dust (EAFD). It contains significant amounts of iron, zinc, and lead in the form of oxides, silicates, and carbonates, along with minor components such as chromium, tin, nickel, and cadmium. Therefore, most countries classify this residue as hazardous waste. Its management presents scientific and technical challenges that significantly impact the economics of the steelmaking process.

Currently, the management of this waste consists of burying it at the final disposal plant. However, there are multiple treatment alternatives to reduce its hazardousness by recovering and immobilizing marketable heavy metals such as Zn and Pb. This can be carried out through hydrometallurgical dissolution with selective extraction of Zn, leaving the remaining metal components in the solid. Zn has amphoteric properties, but it shares this characteristic with Pb, so alkaline extraction solubilizes both metals simultaneously, leaving iron compounds in insoluble form. This stage yields two streams, one solid and one liquid. The liquid stream is a zinc-rich solution from which Zn could be electrochemically recovered as a valuable product, ensuring that the electrodeposited material shows characteristics that allow easy recovery by mechanical means. The solid stream can be stabilized by incorporating it into an alkali-activated inorganic polymer (geopolymer) to obtain a product or waste that captures and immobilizes the heavy metals, or it can be managed by a third party. To avoid lead contamination of the product of interest (pure Zn), the liquid stream can go through a precipitation process with sodium sulfide, removing the lead as lead sulfide, or pure lead can be electrodeposited, by controlling the voltage or current, before electrodepositing the Zn in a subsequent stage. Pilot-scale testing of these processes has been conducted previously [1].

Each step generates different costs and alternatives for managing this residue. For this, the process synthesis approach is considered suitable, allowing for the simultaneous analysis of these alternatives and the selection of the one that generates the greatest benefit.

This work studies the management of steel mill residue with a process synthesis approach, combining experimental data from pilot-scale operations, data collected from metallurgical companies, and data based on expert judgment. The stages to achieve this objective involve superstructure conception, its translation into mathematical language, and implementation in mathematical programming software (GAMS). The aim is to assist decision-making at the managerial level, so the chosen objective function maximizes the commercial value per ton of EAFD to be managed. A superstructure model is proposed that combines binary variables for operations and binary variables for artificial streams, enabling accurate modeling of the various connections involved in this process management network. Artificial streams were used to formally describe disjunctions. Sensitivity analyses are currently being conducted.

References

[1] M. Doldán, M. Duarte Guigou, G. Pereira, M. Ohanian, Electrodeposition of Zinc and Lead from Electric Arc Furnace Dust Dissolution: A Kinetic Study, in A Closer Look at Chemical Kinetics, Nova Science Publishers, 2022.



Network theoretical analysis of the reaction space in biorefineries

Jakub Kontak, Jana Marie Weber

Intelligent Systems Department, Delft University of Technology, Netherlands

The large chemical reaction space has been analysed intensively, both to learn the patterns of chemical reactions (Fialkowski et al., 2005; Jacob & Lapkin, 2018; Llanos et al., 2019; Mann & Venkatasubramanian, 2023) and to understand its wiring structure for use in network pathway planning problems (Weber et al., 2019; Ulonska et al., 2016). With increasing pressure towards more sustainable production systems, it becomes worthwhile to model the reaction space reachable from biobased feedstocks, e.g. through integrated processing steps in biorefineries.

In this work we focus on a network-theoretical analysis of biorefinery reaction data. We obtain biorefinery reaction data from the REAXYS web interface, propose a directed all-to-all mapping between reactants and products for comparability with related work, and finally compare the reaction space obtained from biorefineries with the network of organic chemistry (NOC) (Jacob & Lapkin, 2018). Our findings indicate that, despite having 1000 times fewer molecules, the constructed network resembles the NOC in its scale-free nature and shares similarities regarding its "small-world" property. Our results further suggest that the biorefinery network has a higher centralisation and a higher clustering coefficient. Additionally, we inspect the coverage rate of our data querying strategy and find that our network covers most of the common second and third intermediates, yet only a few biorefinery end-products and direct feedstock molecules are present.
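The sort of statistics compared here can be computed with standard graph tooling; the toy sketch below builds the directed all-to-all reactant-product mapping for two invented reactions with networkx and reports basic degree and clustering figures.

```python
# Sketch: all-to-all reactant->product edges for two toy reactions,
# plus the kind of basic network statistics compared against the NOC.
import networkx as nx

reactions = [({"glucose", "H2"}, {"sorbitol"}),
             ({"sorbitol"}, {"isosorbide", "H2O"})]

G = nx.DiGraph()
for reactants, products in reactions:
    G.add_edges_from((r, p) for r in reactants for p in products)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("avg clustering:", nx.average_clustering(G.to_undirected()))
print("degrees:", dict(G.degree()))
```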

References

Fialkowski, M., Bishop, K. J., Chubukov, V. A., Campbell, C. J., & Grzybowski, B. A. (2005). Architecture and evolution of organic chemistry. Angewandte Chemie International Edition, 44(44), 7263-7269.

Jacob, P. M., & Lapkin, A. (2018). Statistics of the network of organic chemistry. Reaction Chemistry & Engineering, 3(1), 102-118.

Llanos, E. J., Leal, W., Luu, D. H., Jost, J., Stadler, P. F., & Restrepo, G. (2019). Exploration of the chemical space and its three historical regimes. Proceedings of the National Academy of Sciences, 116(26), 12660-12665.

Mann, V., & Venkatasubramanian, V. (2023). AI-driven hypergraph network of organic chemistry: network statistics and applications in reaction classification. Reaction Chemistry & Engineering, 8(3), 619-635.

Weber, J. M., Lió, P., & Lapkin, A. A. (2019). Identification of strategic molecules for future circular supply chains using large reaction networks. Reaction Chemistry & Engineering, 4(11), 1969-1981.

Ulonska, K., Skiborowski, M., Mitsos, A., & Viell, J. (2016). Early‐stage evaluation of biorefinery processing pathways using process network flux analysis. AIChE Journal, 62(9), 3096-3108.



Applying Quality by Design to Digital Twin Supported Scale-Up of Methyl Acetate Synthesis

Jessica Ebert1, Amy Koch1, Isabell Viedt1,3, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Chair of Process Control Systems; 3TUD Dresden University of Technology, Process-to-Order Lab

The scale-up from lab to production scale is an essential cost and time factor in the development of chemical processes, especially when high demands are placed on product quality. Quality by Design (QbD) is a common method in the pharmaceutical industry to ensure product quality throughout the production process (Yu et al., 2014), which is why the QbD methodology could be a useful tool for process development in the chemical industry as well. Concepts from the literature demonstrate how mechanistic models are used for direct scale-up from laboratory equipment to production equipment by dispensing with intermediate scales in order to shorten the time to process (Furrer et al., 2021). The integration of Quality by Design into a direct scale-up approach promises further advantages, such as a deeper process understanding and the assurance of process safety. Digital twins consisting of simulation models digitally represent the behavior of plants and the processes running on them, and thereby enable model-based scale-up.

In this work a simulation-based workflow for the digital twin supported scale-up of processes and process plants is proposed, which integrates various aspects of the Quality by Design methodology. The key element is the determination of the design space by defining Critical Quality Attributes and identifying Critical Process Parameters as well as Critical Material Attributes (Yu et al., 2014). The design space is transferred from the laboratory-scale model to the production-scale model. To illustrate the concept, the workflow is implemented for the use case of the synthesis of methyl acetate. The process is scaled from a 2 L laboratory stirred tank reactor to a 50 L production plant, fulfilling each step of the scale-up workflow: modelling, definition of the target product quality, experiments, model adaption, parameter transfer and design space identification. The presentation of the results focuses on the design space identification and transfer using global system analysis. Finally, benefits and limitations of the implementation of Quality by Design in direct scale-up using digital twins are discussed.
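Conceptually, design space identification by global system analysis amounts to sampling the critical process parameters and keeping the region in which the Critical Quality Attributes are met. The sketch below illustrates this with an entirely hypothetical purity model and acceptance limit; it is not the authors' workflow or use case.

    import numpy as np

    rng = np.random.default_rng(0)

    def purity(T, t_batch):
        # Hypothetical stand-in for the scale-specific process model.
        return 0.9 + 0.002 * (T - 320.0) - 0.0005 * np.abs(t_batch - 60.0)

    # Monte Carlo screening of the CPP ranges (global-system-analysis style).
    T = rng.uniform(300.0, 340.0, 5000)    # temperature, K (placeholder range)
    t = rng.uniform(30.0, 90.0, 5000)      # batch time, min (placeholder range)
    ok = purity(T, t) >= 0.92              # CQA acceptance limit (placeholder)

    design_space = np.c_[T[ok], t[ok]]     # operating region meeting the CQA
    print(f"{ok.mean():.0%} of sampled conditions lie in the design space")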

References

Schindler, Polyakova, Harding, Weinhold, Stenger, Grünewald & Bramsiepe (2020). General approach for technology and Process Equipment Assembly (PEA) selection in process design. Chemical Engineering and Processing – Process Intensification, 159, Article 108223.

T. Furrer, B. Müller, C. Hasler, B. Berger, M. Levis & A. Zogg (2021). New Scale-up Technologies for Hydrogenation Reactions in Multipurpose Pharmaceutical Production Plants. Chimia, 75(11).

L. X. Yu, G. Amidon, M. A. Khan, S. W. Hoag, J. Polli, G. K. Raju & J. Woodcock (2014). Understanding Pharmaceutical Quality by Design. The AAPS Journal, 16, 771–783.



Digital Twin supported Model-based Design of Experiments and Quality by Design

Amy Koch1, Jessica Ebert1, Isabell Viedt1,2, Andreas Bamberg4, Leon Urbas1,2,3

1TUD Dresden University of Technology, Process Systems Engineering Group; 2TUD Dresden University of Technology, Process-to-Order Lab; 3TUD Dresden University of Technology, Chair of Process Control Systems; 4Merck Electronics KGaA, Frankfurter Str. 250, Darmstadt 64293, Germany

In the specialty chemical industries, faster time-to-process is a significant measure of success. One key lever for faster time-to-process is reducing the time required for experimental efforts in the process development phase. Here, Digital Twin workflows based on methods such as global system analysis, model-based design of experiments (MBDoE), and the identification of the design space, as well as leveraging prior knowledge of the equipment capabilities, can be utilized to reduce the experimental load (Koch et al., 2023). MBDoE utilizes prior knowledge (model structure and initial parameter estimates) to optimally design an experiment by identifying optimal process conditions, thereby reducing experimental effort (Franceschini & Macchietto, 2008). Further benefit can be achieved by applying Quality by Design methods (Katz & Campbell, 2012) to these Digital Twin workflows; here, the prior knowledge supplied by the Digital Twin is used to pre-screen combinations of critical process parameters and model parameters to identify suitable parameter combinations for inclusion in the MBDoE optimization problem (Mädler, 2023). In this paper, a Digital Twin workflow based on incorporating prior knowledge of equipment capabilities into global system analysis and subsequent MBDoE is first presented and the relevant methodology explained. This workflow is illustrated with a prototypical implementation using the process simulation tool gPROMS for the specific use case of an esterification process in a stirred tank reactor. As a result, benefits such as improved parameter estimation and reduced experimental effort compared to traditional DoE are illustrated, and a critical evaluation of the applied methods is provided.
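To make the MBDoE idea concrete, the following sketch optimizes a single sampling time for a one-parameter first-order kinetic model by maximizing the Fisher information, to which the D-optimality criterion reduces in the scalar case; the model and numbers are illustrative only, not the esterification case study.

    import numpy as np
    from scipy.optimize import minimize_scalar

    k0 = 0.2   # prior parameter estimate for y(t) = exp(-k * t), placeholder

    def neg_fisher_info(t):
        # Parametric sensitivity dy/dk = -t * exp(-k0 * t); with unit noise the
        # scalar Fisher information of one measurement is sensitivity squared.
        s = -t * np.exp(-k0 * t)
        return -(s * s)

    res = minimize_scalar(neg_fisher_info, bounds=(0.1, 30.0), method="bounded")
    print(f"most informative sampling time: {res.x:.2f} (theory: 1/k0 = {1/k0:.1f})")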

References

G. Franceschini & S. Macchietto (2008). Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63(19), 4846–4872.

P. Katz & C. Campbell (2012). FDA 2011 process validation guidance: Process validation revisited. Journal of GXP Compliance, 16(4), 18.

A. Koch, J. Mädler, A. Bamberg & L. Urbas (2023). Digital Twins for Scale-Up in Modular Plants: Requirements, Concept, and Roadmap. In Computer Aided Chemical Engineering, 2063–2068. Elsevier.

J. Mädler, 2023. Smarte Process Equipment Assemblies zur Unterstützung der Prozessvalidierung in modularen Anlagen.



Bioprocess control using hybrid mechanistic and Gaussian process modeling

Lydia Katsini, Satyajeet Sheetal Bhonsale, Jan F.M. Van Impe

BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium

Control of bioprocesses is crucial for achieving optimal yields of various products. In this study, we focus on the fermentation of Xanthophyllomyces dendrorhous, a yeast known for its ability to produce astaxanthin, a high-value carotenoid with applications in pharmaceuticals, nutraceuticals, and aquaculture. Successful application of optimal control requires, however, accurate and robust process models (Bhonsale et al., 2022). Since the system dynamics are non-linear and biological variability is an inherent property of the process, modeling such a system is demanding.

Aiming to tackle the system complexity, our approach to modeling this process follows Vega-Ramon et al. (2021), who combined two distinct methods: mechanistic and machine learning models. On the one hand, mechanistic models, based on existing knowledge, provide valuable insights into the underlying phenomena but are limited by their demand for accurate parameterization and may struggle to adapt to process disturbances. On the other hand, machine learning models, based on experimental data, can capture the underlying patterns without prior knowledge; however, they are limited to the domain of the training data used to build them.

A key challenge in both modeling approaches is dealing with uncertainty, more specifically the biological variability that is inherent in biological systems. To address this, we utilize Gaussian Process (GP) modeling, a flexible, non-parametric machine learning technique that provides a framework for uncertainty quantification. In this study, the use of GPs allows for robust control of the fermentation by accounting for the biological variability of the system.
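One common way to combine the two model classes, in the spirit of Vega-Ramon et al. (2021), is to let a GP learn the residual between mechanistic predictions and data, with its predictive standard deviation quantifying variability. Below is a minimal sketch with a made-up growth curve; it is not the actual X. dendrorhous model or data.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def mechanistic(t):
        # Hypothetical mechanistic prediction of astaxanthin titer vs. time.
        return 1.2 * (1.0 - np.exp(-0.05 * t))

    t_obs = np.linspace(0, 100, 15)[:, None]
    y_obs = (mechanistic(t_obs.ravel()) + 0.05 * np.sin(0.1 * t_obs.ravel())
             + np.random.default_rng(1).normal(0, 0.02, 15))  # synthetic "data"

    # The GP learns the structured mismatch (residual) and its uncertainty.
    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-3))
    gp.fit(t_obs, y_obs - mechanistic(t_obs.ravel()))

    t_new = np.linspace(0, 120, 50)[:, None]
    resid_mean, resid_std = gp.predict(t_new, return_std=True)
    y_hybrid = mechanistic(t_new.ravel()) + resid_mean
    # resid_std is what feeds a robust optimal control formulation.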

An optimal control framework is implemented for both the hybrid model and the mechanistic model to identify the optimal sugar feeding strategy for maximizing astaxanthin yield. This study demonstrates how optimal control can benefit from hybrid mechanistic and machine learning bioprocess modeling.

References

Bhonsale, S. et al. (2022). Nonlinear Model Predictive Control based on multi-scale models: is it worth the complexity? IFAC-PapersOnLine, 55(23), 129-134. https://doi.org/10.1016/j.ifacol.2023.01.028

Vega-Ramon, F. et al. (2021). Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty. Biotechnology and Bioengineering, 118, 4854–4866. https://doi.org/10.1002/bit.27950



Tune Decomposition Schemes for Large-Scale Mixed-Integer Programs by Bayesian Optimization

Guido Sand1, Sophie Hildebrandt1, Sina Nunes1, Chung On Yip1, Meik Franke2

1Pforzheim University of Applied Science, Germany; 2University of Twente, The Netherlands

Heuristic decomposition schemes are a common approach to approximately solve large-scale mixed-integer programs (MIPs). A typical example is moving horizon schemes applied to scheduling problems. Decomposition schemes usually exhibit parameters which can be used to tune their performance; examples for moving horizon schemes are the horizon length and the step size of its movement. Systematic tuning approaches are seldom reported in the literature.

In a previous paper by the first two authors, Bayesian optimization was proposed as a methodological approach to systematically tune decomposition schemes for mixed-integer programs. This approach is reasonable since the tuning problem is a black-box optimization problem with an expensive-to-evaluate objective function: each evaluation of the objective function of the Bayesian optimization requires the solution of the mixed-integer program using the specifically parametrized decomposition scheme. That paper demonstrated, using an exemplary mixed-integer hoist scheduling model and a moving horizon scheme, that the proposed approach is feasible and effective in principle.
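The structure of such a tuning loop can be sketched with scikit-optimize's Gaussian-process-based gp_minimize; the wrapper function, its toy return value, and all bounds below are hypothetical stand-ins for the hoist-scheduling decomposition, not the authors' code.

    from skopt import gp_minimize
    from skopt.space import Integer

    def solve_with_moving_horizon(horizon, step):
        # Hypothetical stand-in: parametrize the moving horizon scheme, solve
        # the hoist-scheduling MIP heuristically, and return the makespan.
        return (horizon - 12) ** 2 + (step - 4) ** 2 + 100.0

    # Each evaluation is expensive (a full MIP solve), which is exactly the
    # setting where Bayesian optimization pays off.
    result = gp_minimize(
        lambda p: solve_with_moving_horizon(horizon=p[0], step=p[1]),
        dimensions=[Integer(2, 20, name="horizon"),
                    Integer(1, 10, name="step")],
        n_calls=30, random_state=0)
    print("best (horizon, step):", result.x, "-> makespan", result.fun)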

After the proof of concept in the previous paper, the paper at hand discusses detailed results of three studies of the Bayesian optimization-based approach using the same exemplary hoist scheduling model:

  1. Examine the solution space:
    The graphs of the objective function (makespan or computational cost) of the tuning problem are analysed for small instances of the mixed-integer model, considering the sequences of evaluations of the Bayesian optimization in the integer-valued space of tuning parameters. The results show that the Bayesian optimization converges relatively quickly to good solutions even though visual inspection of the objective-function graphs reveals only little structure.
  2. Compare different acquisition functions:
    The type of acquisition function is studied since it is assumed to be a tuning parameter of the Bayesian optimization with a major impact on its performance. Four types of acquisition functions are applied to a set of test cases and compared with respect to their mean performance and its variance. The results show similar performance for three of the types and slightly inferior performance for the fourth.
  3. Enlarge the tuning-parameter space:
    The scaling behaviour of the Bayesian optimization-based approach with respect to the dimension of the tuning-parameter space is studied: the number of tuning parameters is increased from two to four (three integer-valued and one real-valued). First results indicate that the studied approach is also feasible for real-valued tuning parameters and remains effective in higher-dimensional spaces.

The results indicate that Bayesian optimization is a promising approach to tune decomposition schemes for large-scale mixed-integer programs. Future work will investigate the optimization of tuning parameters for multiple instances in two directions. The first, inspired by hyperparameter optimization methods, aims at tuning one decomposition scheme that is on average optimal across multiple instances. The second, motivated by algorithm selection methods, aims at predicting good tuning parameters from previously optimized tuning parameters.



Enhancing industrial symbiosis to reduce CO2 emissions in a Portuguese industrial park

Ricardo Nunes Dias1,2, Fátima Nunes Serralha2, Carla Isabel Costa Pinheiro1

1Centro de Química Estrutural, IMS, Department of Chemical Engineering, Instituto Superior Técnico/Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; 2RESILIENCE – Center for Regional Resilience and Sustainability, Escola Superior de Tecnologia do Barreiro, Instituto Politécnico de Setúbal, 2839-001 Lavradio, Portugal

The primary objective of any industry is to generate profit, which often results in a focus on the efficiency of production, without neglecting environmental and social issues. However, it is important to recognise that every process has multiple outlets, including the desired products and residues. In some cases, the effort required to further process these residues may, at first glance, outweigh the benefits, leading to their disposal at a cost to the industry. Yet many of these residues can be sorted to enhance their value, enabling their sale instead of disposal [1].

This work presents a model developed in GAMS to identify and quantify potential symbioses, both those already occurring and those that could occur if the appropriate relations between enterprises were established. A network flow is modelled to establish as many symbioses as possible. The objective function maximises material exchange between enterprises while ensuring that every possible symbiosis is established. This will result in exchanges between enterprises that may involve amounts of waste too small to be implemented. However, this outcome is useful for decision-makers, as having multiple sinks for a given residue can be beneficial [2,3]. EM(n,j,i,n') (exchanged material) is the main decision variable of the model, where the indices are: n and n', the donor and receiver enterprises, respectively; j, the category; and i, the residue. A binary variable, Y(n,j,i,n'), is used to allow or disallow a given exchange between enterprises. Each residue is categorised according to its role in each enterprise: it can be an industrial residue (category 3) or a resource (category 0), while categories 1 and 2 are reserved for products and subproducts, respectively. The wastes produced are converted into CO2eq (carbon dioxide equivalent) as a quantification of environmental impact. Reducing the amount of waste produced can significantly reduce the environmental impact of a given enterprise. This study assesses the largest industrial park in Portugal, which encompasses a refinery and a petrochemical plant as its two largest facilities. The direct CO2 emissions mitigated by the deployment of CO2 utilisation processes can be quantified: the establishment of a methanol plant utilising CO2 can reduce the CO2 emissions of the park by 335,560 tons. A range of CO2 utilisation processes will be evaluated to determine the optimal processes for implementation.
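A minimal Pyomo sketch of the core variable structure follows; the actual model is implemented in GAMS, and the enterprises, residues, and availabilities below are placeholders.

    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    # Candidate exchanges as (donor n, category j, residue i, receiver n').
    ARCS = [("refinery", 3, "CO2", "methanol_plant"),
            ("petrochem", 3, "steam_cond", "refinery")]
    avail = {"CO2": 400000.0, "steam_cond": 120000.0}   # t/y, placeholders

    m.EM = pyo.Var(ARCS, domain=pyo.NonNegativeReals)   # exchanged material
    m.Y = pyo.Var(ARCS, domain=pyo.Binary)              # exchange established?

    def cap_rule(m, n, j, i, n2):
        # A flow may only exist on an established exchange, up to availability.
        return m.EM[n, j, i, n2] <= avail[i] * m.Y[n, j, i, n2]
    m.cap = pyo.Constraint(ARCS, rule=cap_rule)

    # Maximize exchanged material while rewarding every opened symbiosis.
    m.obj = pyo.Objective(expr=sum(m.EM[a] for a in ARCS) + sum(m.Y[a] for a in ARCS),
                          sense=pyo.maximize)
    # pyo.SolverFactory("cbc").solve(m)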

Even though a residue can have impacts on several environmental aspects, this work focuses on reducing carbon emissions. Furthermore, it was found that cooperation between local enterprises, together with their announced investments, can lead to significant environmental gains in the region studied.

References

[1] J. Patricio, Y. Kalmykova, L. Rosado, J. Cohen, A. Westin, J. Gil, Resour Conserv Recycl 185 (2022). 10.1016/j.resconrec.2022.106437.

[2] D. C. Y. Foo, Process Integration for Resource Conservation, 2016.

[3] L. Fraccascia, D. M. Yazan, V. Albino, H. Zijm, Int J Prod Econ 221 (2020). 10.1016/j.ijpe.2019.08.006.



Blue Hydrogen Plant: Accurate Hybrid Model Based on Component Mass Flows and Simplified Thermodynamic Properties is Practically Linear

Farbod Maghsoudi, Raunak Pandey, Vladimir Mahalec

McMaster University, Canada

Current models of process plants are either rigorous first-principles models based on molar flows and fractions (used for process design or optimization of operating conditions) or simple mass- or volumetric-flow models (used for production planning and scheduling). Detailed models compute stream properties via nonlinear calculations which employ mole fractions, resulting in many nonlinearities and limiting plant-wide models to single time-period computation. Planning models are flow-based, usually linear, and therefore solve rapidly, which makes them suitable for multi-time-period representation of the plant at the expense of lower accuracy.

Once a plant is in operation, most of its streams stay at or close to the normal operating conditions which are maintained by the process control loops. Therefore, each stream can be described by its properties at these normal operating conditions (specific enthalpy, temperature, pressure, density, heat capacity, vapor fraction, etc.). It should be noted that these bulk properties per unit mass are much less sensitive to changes in stream composition if one employs mass units instead of moles (e.g. the latent heat of C5 to C10 hydrocarbons varies much less in energy/mass than in energy/mole units).

Based on these observations, this work employs a new plant modelling paradigm which leads to models with accuracy close to that of rigorous models while being (almost) linear, thereby permitting rapid solution of large-scale single-period and multi-period models. Instead of total molar flow and mole fractions, we represent streams by mass flows of components and total mass flow. In addition, we employ simplified thermodynamic properties on a per-mass basis (property value/mass), which eliminates the need to use mole or mass fractions.
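The linearity gained by this representation can be seen in a few lines: with per-mass properties frozen at normal operating conditions, stream enthalpy flows become linear in the component mass flows, so mixing and splitting need no property recalculation. The components and property values below are placeholders.

    # Stream = component mass flows (kg/h); properties frozen at normal
    # operating conditions, per unit mass.
    h_ref = {"CH4": -4650.0, "H2": 1800.0, "CO2": -8940.0}   # kJ/kg, placeholders

    def enthalpy_flow(mass_flows):
        # H = sum_i m_i * h_i : no mole fractions, no nonlinear property call.
        return sum(m * h_ref[c] for c, m in mass_flows.items())

    feed = {"CH4": 1000.0, "H2": 0.0, "CO2": 50.0}
    recycle = {"CH4": 100.0, "H2": 20.0, "CO2": 5.0}
    mixed = {c: feed[c] + recycle[c] for c in feed}          # linear mixing
    print(abs(enthalpy_flow(mixed)
              - (enthalpy_flow(feed) + enthalpy_flow(recycle))) < 1e-6)  # True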

This paradigm has been used to model a blue hydrogen plant described in the NETL report [1]. The plant converts natural gas into hydrogen and CO2 via autothermal reforming (ATR) and water-gas shift (WGS) reactors. Oxygen is supplied from the air separation unit, while steam and electricity are supplied by a combined heat and power (CHP) unit. Stream properties at normal operating conditions have been obtained from the Aspen Plus plant model. Surrogate reactor models employ component mass flows and have only one bilinear term, even though their Aspen Plus counterpart is a highly nonlinear RGIBBS model. The entire plant model has a handful of bilinear terms, and its results are within 1% to 2% of the rigorous Aspen Plus model.

The novelty of our work lies in changing the plant modelling paradigm from molar flows, fractions, and rigorous thermodynamic property calculations to component mass flows and simplified thermodynamic properties. Rigorous property calculation is used to update the simplified properties after the hybrid model converges. This novel plant modelling paradigm greatly reduces the nonlinearities of plant models while maintaining high accuracy. Due to its rapid convergence, the same plant model can be used for optimization of operating conditions, multi-time-period production planning, and scheduling.

References:

  1. Comparison of Commercial State-of-the-art Fossil-based Hydrogen Production Technologies, DOE/NETL-2022/3241, April 12, 2022


Synergies between the distillation of first- and second-generation sugarcane ethanol for sustainable biofuel production

Luiz De Martino Costa1,2, Abhay Athaley3, Zach Losordo4, Adriano Pinto Mariano1, John Posada2, Lee Rybeck Lynd5

1Universidade Estadual de Campinas, Brazil; 2Delft Universtity of Technology, The Netherlands; 3National Renewable Energy Laboratory, United States; 4Terragia Biofuel Incorporated, United States; 5Dartmouth College, United States

Despite the yearly opening of second-generation (2G) sugarcane distilleries in Brazil, 2G bagasse ethanol distillation remains a challenging unit operation because the low titer of the ethanol beer increases heat duty and production costs per unit mass of ethanol produced. For this reason, and because of the logistics involved in transporting sugarcane bagasse, 2G bagasse ethanol is currently produced commercially in plants annexed to first-generation (1G) ethanol plants, and this configuration is likely to become one path of evolution for 2G ethanol production in Brazil.

In the context of integrated 1G2G sugarcane ethanol plants, mixing ethanol beers from both processes may reduce the production costs of 2G ethanol (personal communication with a 2G ethanol producer). However, the energy, process, economic, and environmental advantages of this integrated model compared to its stand-alone counterpart remain unclear. Thus, this work focused on the energy synergies between the distillation sections of integrated first- and second-generation sugarcane ethanol mills.

For this investigation, integrated and separated 1G2G distillation simulations were conducted using Aspen Plus v10. The separated distillation arrangement consisted of two RadFrac columns: one to distill 1G beer and another to distill 2G beer to near-azeotropic levels (91.5 wt% ethanol). In the integrated distillation arrangement, two columns were used: one to rectify 2G beer and another to distill 2G vapor and 1G beer to azeotropic levels. The mass flow ratio between 1G and 2G beer was assumed to be 3:1; both mixtures enter the columns as saturated liquids and consist of only water and ethanol. The 1G beer titer was assumed to be 100 g/L, and the 2G beer titer was varied from 10 to 40 g/L to understand and compare the energy impacts of low-titer 2G beer. The energy analysis was conducted by quantifying the reboilers' duty and the distilled ethanol production to calculate the specific heating energy demand, as sketched below.
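The comparison metric reduces to a simple ratio; the sketch below states it explicitly, with placeholder duty and flow values chosen only to reproduce the order of magnitude of the reported figures, not taken from the simulations.

    def specific_heating_demand(reboiler_duties_MW, ethanol_flow_kg_s):
        # e = total reboiler duty / distilled ethanol mass flow
        #   -> MJ/s divided by kg/s gives MJ per kg of ethanol.
        return sum(reboiler_duties_MW) / ethanol_flow_kg_s

    print(specific_heating_demand([30.0, 9.0], 12.0))   # -> 3.25 MJ/kg (placeholders)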

1G2G integration resulted in an overall heating energy demand for ethanol distillation at a near-constant value of 3.25 MJ/kgEthanol, regardless of the 2G ethanol titer. In comparison, the separated scenario had an energy demand ranging from 3.60 (40 g/L 2G beer titer) to 3.80 (10 g/L 2G beer titer) MJ/kgEthanol, meaning that energy savings of 9.5% to 14.5% can be obtained. In addition to these savings, the energy demand found for the integrated scenario is almost the same as that for 1G beer alone. The main reason for these results is that the reflux ratio required for 2G distillation drops in an integrated 1G2G column to nearly that of 1G-only operation, reducing the energy demand attributable to 2G ethanol. This can be observed in the integrated scenario by the 2G ethanol heat demand in isolation staying at a near-constant value of 3.35 MJ/kgEthanol over the studied range of 2G ethanol titers, whereas it changes from 5.81 to 19.92 MJ/kgEthanol in the separated scenario. These results indicate that distillation integration should be chosen for 1G2G sugarcane distilleries for a less energy-demanding process and, therefore, more sustainable biofuel production.



Development of anomaly detection models independent of noise and missing values using graph Laplacian regularization

Yuna Tahashi, Koichi Fujiwara

Department of Materials Process Engineering, Nagoya University, Japan

Process data frequently suffer from imperfections such as missing values or measurement noise due to sensor malfunctions. Such data imperfections pose significant challenges to process fault detection, potentially leading to false positives or to overlooking rare faulty events. Fault detection models with high sensitivity may excessively detect these irregularities, which disturbs the identification of true faulty events.

To address this challenge, we propose a new fault detection model based on an autoencoder architecture with graph Laplacian regularization that considers specific temporal relationships among time-series data. Laplacian regularization assumes that neighboring samples remain similar, imposing significant penalties when neighboring samples lack smoothness. In addition, graph Laplacian regularization can take the smoothness of graph structures into account. Since normal samples in close temporal proximity should keep similar characteristics, a graph can be utilized to represent temporal dependencies between successive samples in a time series. In the proposed model, the nearest correlation (NC) method, a structural learning algorithm that considers the correlation among variables, is used. Using graph Laplacian regularization with the NC method, missing values and measurement noise are expected to be corrected automatically from the viewpoint of the correlation among variables under normal process conditions, so that only significant changes such as faulty events are detected, because these cannot be corrected sufficiently. The proposed method has broad applicability to various models because the graph regularization term based on the NC method is simply added to the objective function when a model is trained.
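A minimal PyTorch sketch of such a training objective follows; the graph below is a simple temporal chain, whereas the actual model builds the graph with the NC method, and the architecture details are not reproduced here.

    import torch

    def laplacian_regularized_loss(x, x_hat, z, L, lam=0.1):
        # Reconstruction error plus graph smoothness penalty on latent codes z:
        # tr(z^T L z) is small when codes of graph-adjacent samples are similar.
        recon = torch.mean((x - x_hat) ** 2)
        smooth = torch.trace(z.T @ L @ z) / z.shape[0]
        return recon + lam * smooth

    # Toy chain graph over 5 consecutive samples: L = D - A.
    A = torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
    L = torch.diag(A.sum(dim=1)) - A
    # x, x_hat: (5, n_vars) batches from the autoencoder; z: (5, n_latent).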

To demonstrate the efficacy of our proposed model, we conducted a case study using simulation data generated from a vinyl acetate monomer (VAM) production process, employing a rigorous process model built on Visual Modeler (Omega Simulation Inc., Japan). In the VAM simulator, six faulty scenarios, such as sudden changes in feed composition and pressure, were generated.

The results show that the fault detection model with graph Laplacian regularization provides higher fault detection accuracy than the model without it in several faulty scenarios. The false alarm rate (FAR) and the missed alarm rate (MAR) were improved by up to 0.4% and 50.1%, respectively. In addition, the detection latency (DL) was shortened by up to 1,730 seconds. Therefore, it was confirmed that graph Laplacian regularization with the NC method is particularly effective for fault detection.

The use of graph Laplacian regularization with the NC method is expected to yield a more reliable fault detection model, capable of robustly handling noise and missing values, reducing false positives, and rapidly identifying true faulty events. This advancement promises to enhance the efficiency and reliability of process monitoring and control across various industrial applications.



Comparing incinerator kiln model predictions with measurements of industrial plants

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Because of the heterogeneity of the waste and the lack of local measurements, the industry relies on traditional control strategies, including manual piloting. Advanced modeling strategies have been used to gain insights into the design of such facilities. Despite two decades of scientific effort, obtaining good model accuracy and reliability is still challenging.
In this work, the predictions of a phenomenological model based on the simplification of literature works are compared with the measurements of an industrial incinerator. The model consists of two sub-models, namely the bed model and the freeboard model. The bed refers to the solid waste traveling through the kiln, while the freeboard refers to the gaseous space above the bed where the flame resides.
The bed of waste is simulated with finite volumes and a walking-columns approach, while the freeboard is modeled with the zone method, and the interface with the boiler is taken into account through a three-layer system. The code implementation of the model takes into account the geometry and other important plant characteristics in a way that makes it easy to simulate different types of grate kilns.
The incinerator used as a reference for the development of the model is located in Alsace, France. It features a waste chute, a three-zone grate, water walls in the kiln, four secondary air injection points and a cooling water injection. The simulation results are compared with temperature and gas composition measurements. Except for the oxygen concentration, gas composition data need to be back-calculated from stack gas analyzers. The simulated bed height is compared with the observable fraction of the actual bed. The model reproduces the static behavior and general dynamic tendencies well.
The very strong sensitivity of the model to particle diameter is discussed. Additionally, the model is configured for two other incinerators, and a preliminary comparison with industrial data is performed to assess the generality of the model.
Despite encouraging results, the need for further work on the solid-phase behavior is highlighted.



Modeling the freeboard of a municipal waste incinerator

Lionel Sergent1,2, Abderrazak Latifi1, François Lesage1, Jean-Pierre Corriou1, Thibaut Lemeulle2

1Université de Lorraine, Nancy, France; 2SUEZ, Schweighouse-sur-Moder, France

Roughly 30% of municipal waste is incinerated in the EU. Despite its apparent simplicity, the heterogeneity of the waste and the scarcity of local measurements make waste incineration a challenging process to describe mathematically.
Most modeling efforts concentrate on the bed behavior. However, the gaseous space above the bed, called the freeboard, also needs to be modeled in order to mathematically represent the behavior of the kiln. Indeed, there is a tight coupling between these two spaces: the bed feeds the freeboard with pyrolysis gases, allowing a flame to form in the freeboard, while the flame radiates heat back to the bed, allowing drying and pyrolysis to take place.
The freeboard may be modeled using various techniques. The most accurate and commonly used technique is CFD, generally with established commercial software. CFD yields detailed flow characteristics, which is very valuable for optimizing secondary air injection. However, a CFD setup is quite heavy and harder to interface with the custom codes typically used for bed modeling. In this work, we propose a coarse model, more adapted to operational use. Each grate zone is associated with a freeboard gas space where homogeneous combustion reactions occur. Radiative heat transfer is modeled using the zonal method. Three layers are used to represent the interface with the boiler and the thermal inertia induced by the refractory. The flow description is reduced to its minimum and solved through the combination of the continuity equation and the ideal gas law, without a momentum balance.
The resulting mathematical model is a system of ODEs that can be easily solved with general-purpose stiff ODE solvers based on backward differentiation formulas. Steady-state simulation results show good agreement with the few measurements available. Dynamic effects are hard to validate due to the lack of local measurements, but general tendencies seem well represented. The coarse freeboard representation is shown to be sufficient to obtain the radiation profile arriving at the bed.
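A sketch of that numerical setting with SciPy's BDF integrator follows; the two-state right-hand side below is a toy stand-in for the zone energy balances, not the actual model.

    from scipy.integrate import solve_ivp

    def freeboard_rhs(t, y):
        # Toy stand-in: two gas-zone temperatures coupled by a stiff T^4
        # radiative exchange term; the real model adds combustion, the zonal
        # radiation factors, and the three-layer boiler interface.
        T1, T2 = y
        q_rad = 2e-11 * (T1**4 - T2**4)      # lumped radiative exchange
        return [50.0 - q_rad, q_rad - 0.5 * (T2 - 400.0)]

    sol = solve_ivp(freeboard_rhs, (0.0, 600.0), [1400.0, 900.0], method="BDF")
    print(sol.y[:, -1])   # zone temperatures at the final time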



Superstructure as a communication tool in pre-emptive life cycle design engaging society: Findings from case studies on battery chemicals, plastics, and regional resources

Yasunori Kikuchi1, Ayumi Yamaki1, Aya Heiho2, Jun Nakatani1, Shoma Fujii1, Ichiro Daigo1, Chiharu Tokoro1,3, Shisuke Murakami1, Satoshi Ohara1

1The University of Tokyo, Japan; 2Tokyo City University, Japan; 3Waseda University, Japan

Emerging technologies require sophisticated design and optimization engaging social systems due to their innovative and rapidly advancing characteristics. Although they have a significant capacity to change material flows and life cycles as they penetrate markets, their future development and sociotechnical regimes, e.g., the regulatory environment, societal infrastructure, and markets, are still uncertain and may affect the optimal systems to be implemented in the future. Multiple technologies are being considered simultaneously for a single issue, and appropriate demarcation and synergistic effects are not being evaluated. Superstructures in process systems engineering can visualize all alternative candidates for design problems and can contain emerging technologies among such candidates.

In this study, we tackle pre-emptive life cycle design for social challenges implementing emerging technologies, with case studies on battery chemicals, plastics, and regional resources. Appropriate alternative candidates were generated with stakeholders in industries and national projects by constructing superstructures. Based on the consensus superstructures, life cycles have been proposed considering life cycle assessment (LCA) through simulations of applying emerging technologies.

Regarding the battery chemistry issue, nickel-manganese-cobalt (NMC) type lithium batteries have become dominant, although the lithium iron phosphate (LFP) type has also been considered as a candidate. The battery chemistries and recycling technologies are the emerging technologies in this issue, and superstructures were proposed for recycling systems (Yonetsuka et al., 2024). Through communication with the managers of Japanese national projects on battery technology, scenarios for battery resource circulation have been developed. The issue of plastics has become the design problem of systems applying biomass-derived and recycle-based carbon sources (Meng et al., 2023; Kanazawa et al., 2024). Based on a superstructure (Nakamura et al., 2024), scenario planning and LCA have been conducted and shared with stakeholders for designing future plastic resource circulation. Regional resources could be circulated by implementing multiple technologies (Kikuchi et al., 2024). Through communication with residents and stakeholders, a demonstration test was conducted.

The case studies yield the following findings. Superstructures with technology assessments can support a common understanding of the applicable technologies and their pros and cons. Because technologies cannot be implemented without social acceptance, CAPE tools should be able to address the sociotechnical and socioeconomic aspects of process systems.

D. Kanazawa et al., 2024, Scope 1, 2, and 3 Net Zero Pathways for the Chemical Industry in Japan, J. Chem. Eng. Jpn., 57 (1). DOI: 10.1080/00219592.2024.2360900.

Y. Kikuchi et al., 2024, Prospective life-cycle design of regional resource circulation applying technology assessments supported by CAPE tools, Comput. Aid. Chem. Eng., 53, 2251-2256

F. Meng et al., 2023, Planet compatible pathways for transitioning the chemical industry, Proc. Natl. Acad. Sci., 120 (8) e2218294120.

T. Nakamura et al., 2024, Assessment of Plastic Recycling Technologies Based on Carbon Resource Circularity Considering Feedstock and Energy Use, Comput. Aid. Chem. Eng., 53, 799-804

T. Yonetsuka et al., 2024, Superstructure Modeling of Lithium-Ion Batteries for an Environmentally Conscious Life-Cycle Design, Comput. Aid. Chem. Eng., 53, 1417-1422



A kinetic model for transesterification of vegetable oils catalyzed by sodium methylate—Insights from inline Raman spectroscopy

Ilias Bouchkira, Mohammad El Wajeh, Adel Mhamdi

Process Systems Engineering (AVT.SVT), RWTH Aachen University, 52074 Aachen, Germany

The transesterification of triolein by methanol for biodiesel production is of great interest due to its potential to provide a sustainable and environmentally friendly alternative to fossil fuels. Biodiesel can be produced from renewable sources like vegetable oils, thereby contributing to reducing greenhouse gas emissions and dependency on non-renewable energy. The process also yields glycerol, a valuable by-product that is used in various industries. Given the growing global demand for cleaner energy and sustainable chemical processes, understanding and modeling the kinetics of biodiesel production is critical for improving efficiency, reducing costs, and ensuring scalability of biodiesel production, especially for model-based process design and control (El Wajeh et al., 2023).

We present a kinetic model of the transesterification of triolein by methanol to produce fatty acid methyl esters (FAME), i.e. biodiesel, and glycerol. For parameter estimation, we perform transesterification experiments using an automated lab-scale system consisting of a semi-batch reactor, dosing pumps, stirring system and a cooling/heating thermostat. An important contribution in this work is that we use inline Raman spectroscopy instead of taking samples for offline analysis. The application of Raman spectroscopy enables continuous concentration monitoring of key species involved in the reaction, i.e. FAME, triglycerides, methanol, glycerol and catalyst.

We employ sodium methylate as a catalyst, addressing a gap in the literature, where kinetic parameter values for the transesterification with this catalyst are lacking. To ensure robust parameter estimation, we perform a global sensitivity-based estimability analysis (Bouchkira et al., 2024), confirming that the experimental data is sufficient for accurate model calibration. The parameter estimation is carried out using genetic algorithms, and we determine the confidence intervals of the estimated parameters through Hessian matrix analysis. This approach ensures reliable and meaningful model parameters for a broad range of operating conditions.
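The estimation setting can be sketched with a lumped reversible reaction and SciPy's differential evolution standing in for the genetic algorithm; the rate constants, concentrations, and one-step stoichiometry below are illustrative, whereas the actual model resolves the stepwise glyceride pathway.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def rhs(t, c, k1, k2):
        # Lumped reversible transesterification: TG + 3 MeOH <-> 3 FAME + G.
        tg, meoh, fame, gly = c
        r = k1 * tg * meoh - k2 * fame * gly
        return [-r, -3.0 * r, 3.0 * r, r]

    t_exp = np.linspace(0.0, 60.0, 13)   # min
    # Synthetic "Raman" profiles generated from known parameters, for illustration.
    c_exp = solve_ivp(rhs, (0.0, 60.0), [0.9, 5.4, 0.0, 0.0],
                      t_eval=t_exp, args=(0.05, 0.01)).y.T

    def sse(params):
        sol = solve_ivp(rhs, (0.0, 60.0), [0.9, 5.4, 0.0, 0.0],
                        t_eval=t_exp, args=tuple(params))
        return float(np.sum((sol.y.T - c_exp) ** 2))

    res = differential_evolution(sse, bounds=[(1e-4, 0.5), (1e-4, 0.5)], seed=0)
    print(res.x)   # recovered (k1, k2)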

We perform experiments at several temperatures relevant for industrial application, with a specific focus on the range around 60°C. The Raman probe used inside the reactor is calibrated offline with high precision, achieving excellent calibration accuracy for concentrations (R2 = 0.99). The predicted concentrations from the model align with the experimental data, with deviations generally under 2%, demonstrating the accuracy and reliability of the proposed kinetic model across different operating conditions.

References

El Wajeh, M., Mhamdi, A., & Mitsos, A. (2023). Dynamic modeling and plantwide control of a production process for biodiesel and glycerol. Industrial & Engineering Chemistry Research, 62(27), 10559-10576.

Bouchkira, I., Latifi, A. M., & Benyahia, B. (2024). ESTAN—A toolbox for standardized and effective global sensitivity-based estimability analysis. Computers & Chemical Engineering, 186, 108690.



Integration of renewable energy and reversible solid oxide cells to decarbonize secondary aluminium production and urban systems

Daniel Florez-Orrego1, Dareen Dardor1, Meire Ribeiro Domingos1, Reginald Germanier2, François Maréchal1

1Ecole Polytechnique Federale de Lausanne, Switzerland; 2Novelis Sierre S.A.

The aluminium recycling and remelting industry is a key actor in advancing a sustainable and circular economy within the aluminium sector. Currently, energy conversion processes in secondary aluminium production are largely dependent on natural gas, exposing the industry to volatile market prices and contributing to significant environmental impacts. To mitigate this, efforts are focused on reducing reliance on fossil fuels by incorporating renewable energy and advanced cogeneration systems. Due to the intermittent nature of renewable energy, a combination of technologies can be employed to improve energy integration and enhance process resilience in heavy industry. These technologies include energy storage systems, oxycombustion furnaces, carbon abatement, power-to-gas technologies, and biomass thermochemical conversion. This configuration allows for seasonal storage of renewable energy, optimizing its use during periods of high electricity and natural gas prices. High-temperature reversible solid oxide cells (rSOC) play a critical role in balancing energy needs while increasing exergy efficiency within the integrated facility, offering advantages over traditional cogeneration systems. When thermally integrated into an aluminium remelting plant, the whole system functions as an industrial battery (i.e., fuel and gas storage), cascading low-grade waste heat to a nearby urban agglomeration. The waste heat temperature from aluminium furnaces and biomass energy conversion technologies supports the integration of high-temperature reversible solid oxide cells. The post-combustion of tail gas from these cells provides heat to the melter furnace, while the electricity generated can be used elsewhere in the system, such as for powering electrical furnaces, rolling processes, ancillary demands, and district heating heat pumps. In fact, by optimally tuning the operating parameters of the rSOC, which in turn depend on the partial load and the utilization factor, the heat-to-power ratio can be modulated to satisfy the energy demands of all the industrial and urban systems involved. The chemically driven heat recovery in the reforming section is also compared to other energy recovery systems, such as supercritical CO2 power cycles and preheater-melter furnace integration. In all cases, the low-grade waste heat, typically rejected to the environment, is recovered and supplied to the city through an anergy district heating network via heat pumping systems. In this advanced integrated scenario, energy consumption increases by only 30% compared to conventional systems based on natural gas and biomass combustion, while CO2 emissions are reduced by a factor of three, particularly when combined with a carbon management and sequestration system. Further reductions in emissions can be achieved if higher shares of renewable electricity become available. Moreover, the use of local renewable energy resources promotes the energy security and sustainability of industries traditionally reliant on fossil energy resources.



A Novel Symbol Recognition Framework for Digitization of Piping and Instrumentation Diagrams

Zhiyuan Li1, Zheqi Liu2, Jinsong Zhao1, Huahui Zhou3, Xiaoxin Hu3

1Department of Chemical Engineering, Tsinghua University, Beijing, China; 2Department of Computer Science and Engineering, University of California, San Diego, US; 3Sinopec Ningbo Engineering Co., Ltd, Ningbo, China

Piping and Instrumentation Diagrams (P&IDs) are essential in the chemical industry, but most exist as scanned images, limiting seamless integration into digital workflows. This paper proposes a method to digitize P&IDs and automate unit operation selection for Hazard and Operability (HAZOP) analysis. We combined convolutional neural networks and transformers to detect devices, pipes, instrumentation, and text in image-format P&IDs. Then we reconstructed the process topology and control structures for each P&ID using distance metric learning. Furthermore, multiple P&IDs were integrated into a comprehensive chemical process knowledge graph by stream and equipment identifiers. To facilitate automated HAZOP analysis, we developed a node-merging algorithm that groups equipment according to predefined unit operation categories, thereby identifying specific analysis objects for intelligent HAZOP analysis.

An evaluation conducted on a dataset comprising 500 simulated P&IDs revealed that the device recognition process achieved over 99% precision and recall, with 93% accuracy in text extraction. Processing time was reduced threefold compared to conventional methods, and the node-merging algorithm yielded satisfactory results. This study improves data sharing in chemical process design and facilitates automated HAZOP analysis.



Twin Roll Press Washer Blockage Prediction: A Pulp and Paper Plant Case Study

Bryan Li1,2, Isaac Severinsen1,2, Wei Yu1,2, Timothy Walmsley2, Brent Young1,2

1Department of Chemical and Materials Engineering, The University of Auckland, Auckland 1010, New Zealand; 2Ahuora – Centre for Smart Energy Systems, School of Engineering, The University of Waikato, Hamilton 3240, New Zealand

A process fault is an unacceptable deviation from the normal state. Process faults can incur significant product and revenue losses, as well as harm to personnel and equipment. The aim of this research is to create a self-learning digital twin that closely replicates and interfaces with a physical plant to appropriately advise plant operators of potential plant faults in the near future. A key challenge in accurately predicting process faults is the scarcity of fault data due to the rarity of fault occurrences. To overcome this challenge, this study uses generative artificial intelligence to create synthetic data indistinguishable from the limited real process fault datasets, so that deep learning algorithms can better learn the fault behaviours. The model capability is further enhanced with real-time fault library updates employing methods of low computational cost: principal component analysis and transfer learning.
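Of the low-cost methods mentioned, the PCA part can be illustrated with a standard residual-based monitoring statistic; the data, dimensions, and control limit below are placeholders rather than plant values.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_normal = rng.normal(size=(500, 8))      # stand-in for washer sensor data

    pca = PCA(n_components=3).fit(X_normal)

    def spe(X):
        # Squared prediction error: residual after projection onto the PCA
        # model; large SPE flags behaviour the normal-operation model
        # cannot explain.
        X_hat = pca.inverse_transform(pca.transform(X))
        return np.sum((X - X_hat) ** 2, axis=1)

    limit = np.quantile(spe(X_normal), 0.99)  # simple empirical control limit
    x_new = X_normal[:1] + 4.0                # synthetic "blockage-like" shift
    print(spe(x_new) > limit)                 # -> [ True ]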

A pulp bleaching and washing process is used as an industrial case study. This process is connected to downstream black liquor evaporators and chemical recovery boilers. Successful development of this model can aid decarbonisation in the pulp and paper industry by decreasing energy wastage, water usage, and process downtime.



Addressing Incomplete Physical Models in Chemical Processes: A Novel Physics-Informed Neural Network Approach

Zhiyuan Xie, Feiya Lv, Jinsong Zhao

Tsinghua University, China, People's Republic of

In recent years, machine learning—particularly neural networks—has exerted a transformative influence on various facets of chemical processes, including variable prediction, fault detection, and fault diagnosis. However, when data is incomplete or insufficient, purely data-driven neural networks often encounter difficulties in achieving high predictive accuracy. Physics-Informed Neural Networks (PINNs) address these limitations by embedding physical knowledge and prior domain expertise into the neural network framework, thereby constraining the solution space and facilitating effective training with limited data. This methodology offers notable advantages in handling scarce industrial datasets. Despite these strengths, PINNs depend on explicit formulations of nonlinear partial differential equations (PDEs), which presents significant challenges when modeling the intricacies of complex chemical processes. To overcome these limitations, this study introduces a novel PINN architecture capable of accommodating processes with incomplete PDE descriptions. Experimental evaluations on a Continuous Stirred Tank Reactor (CSTR) dataset, along with real-world industrial datasets, validate the proposed architecture's effectiveness and demonstrate its feasibility in scenarios involving incomplete physical models.
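A generic PINN training loop conveys the underlying idea: the loss mixes a data-misfit term with the residual of whatever physics is known. The sketch below uses a toy first-order CSTR balance with only the decay term assumed known; it is a conceptual illustration, not the architecture proposed in the paper.

    import torch

    torch.manual_seed(0)
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))

    # Sparse "measurements" of outlet concentration (placeholders).
    t_data = torch.tensor([[0.0], [2.0], [8.0]])
    c_data = torch.tensor([[1.0], [0.62], [0.25]])

    k = 0.2   # known part of the physics: dc/dt = -k*c + (unknown terms)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        t_col = (torch.rand(64, 1) * 10.0).requires_grad_(True)  # collocation pts
        c = net(t_col)
        dc_dt = torch.autograd.grad(c.sum(), t_col, create_graph=True)[0]
        physics = torch.mean((dc_dt + k * c) ** 2)  # residual of known terms only
        data = torch.mean((net(t_data) - c_data) ** 2)
        (data + 0.1 * physics).backward()
        opt.step()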



A Physics-based, Data-driven Numerical Framework for Anomalous Diffusion of Water in Soil

Zeyuan Song, Zheyu Jiang

Oklahoma State University, United States of America

Precision modeling and forecasting of soil moisture are essential for implementing smart irrigation systems and mitigating agricultural drought. Agro-hydrological models, which describe irrigation, precipitation, evapotranspiration, runoff, and drainage dynamics in soil, are widely used to simulate the root-zone (top 1 m of soil) soil moisture content. Most agro-hydrological models are based on the standard Richards equation [1], a highly nonlinear, degenerate elliptic-parabolic partial differential equation (PDE) with a first-order time derivative. However, research has shown that the standard Richards equation is unable to model preferential flow in soil with fractal structure. In such a scenario, the soil exhibits anomalous non-Boltzmann scaling behavior, and the soil moisture content is a function of $\frac{x}{t^{\alpha/2}}$, where $x$ is the position vector, $t$ denotes time, and $\alpha$ is a soil-dependent parameter indicating subdiffusion ($\alpha \in (0,1)$) or superdiffusion ($\alpha \in (1,2)$). Incorporating this functional form of soil moisture into the Richards equation leads to a generalized, time-fractional Richards equation based on fractional time derivatives. Clearly, solving the time-fractional Richards equation for accurate modeling of water flow dynamics in soil faces extensive theoretical and computational challenges. Naïve approaches typically discretize the time-fractional Richards equation using the finite difference method (FDM). However, the stability of FDM is not guaranteed. Furthermore, the underlying physical laws (e.g., mass conservation) are often lost during the discretization process.

Here, we propose a novel numerical method that synergistically integrates the finite volume method (FVM), an adaptive linearization scheme, global random walk, and neural networks to solve the time-fractional Richards equation. Specifically, the fractional time derivatives are first approximated using a trapezoidal quadrature formula before the time-fractional Richards equation is discretized by FVM. Leveraging our previous findings [2], we develop an adaptive linearization scheme to solve the discretized equation iteratively, thereby overcoming the stability issues associated with directly solving a stiff and sparse matrix equation. To better preserve the underlying physics during the solution process, we reformulate the linearized equation using a global random walk algorithm. Next, as opposed to making the prevailing assumption that, in any discretized cell, the soil moisture is proportional to the number of particles, we show that this assumption does not hold. Instead, we propose to use neural networks to model the highly nonlinear relationship between the soil moisture content and the number of particles. We illustrate the accuracy and computational efficiency of our proposed physics-based, data-driven numerical method using numerical examples. Finally, a simple way to efficiently identify the parameter $\alpha$ is developed to match the solutions of the time-fractional Richards equation with experimental measurements.
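For readers unfamiliar with discrete fractional derivatives, the widely used L1 scheme (derived from piecewise-linear, trapezoid-type interpolation of u, and possibly differing in details from the quadrature adopted in this work) approximates the Caputo derivative as follows.

    import numpy as np
    from math import gamma

    def caputo_l1(u, dt, alpha):
        # L1 approximation of the Caputo derivative D_t^alpha u at the last
        # grid point, valid for 0 < alpha < 1:
        #   (dt^-alpha / Gamma(2 - alpha)) * sum_k b_k * (u_{n-k} - u_{n-k-1}),
        #   with b_k = (k + 1)^(1 - alpha) - k^(1 - alpha).
        n = len(u) - 1
        k = np.arange(n, dtype=float)
        b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)
        du = np.diff(u)[::-1]                 # u_{n-k} - u_{n-k-1}
        return dt ** (-alpha) / gamma(2.0 - alpha) * np.dot(b, du)

    # Sanity check against the exact D^alpha t = t^(1-alpha) / Gamma(2-alpha):
    alpha, dt = 0.5, 1e-3
    t = np.arange(0.0, 1.0 + dt, dt)
    print(caputo_l1(t, dt, alpha), 1.0 / gamma(2.0 - alpha))   # nearly equal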

References

[1] L.A. Richards, Capillary conduction of liquids through porous mediums, Physics, 1931, 1(5): 318-333.

[2] Z. Song, Z. Jiang, A Novel Data-driven Numerical Method for Hydrological Modeling of Water Infiltration in Porous Media, arXiv preprint arXiv:2310.02806, 2023.



Supersaturation Monitoring for Batch Crystallization using Empirical and Machine Learning Models

Mohammad Reza Boskabadi, Merlin Alvarado Morales, Seyed Soheil Mansouri, Gürkan Sin

Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 228A, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

Batch crystallization serves as a downstream process within the pharmaceutical and food industries, providing a high degree of flexibility in the purification of a wide range of products. Effective control over the crystal size distribution (CSD) is essential in these processes to minimize waste and the need for recycling, as crystals falling outside the target size range are typically considered waste or are recycled (Boskabadi et al., 2024). The resulting CSD is significantly influenced by the supersaturation (SS) of the mother liquor, a key parameter driving crystal nucleation and growth. Supersaturation is governed by several nonlinear factors, including concentration, temperature, purity, and other quality parameters of the mother liquor, which are often determined through laboratory analysis. Due to the complexity of these dependencies, no direct measurement method or single instrument exists for supersaturation assessment (Morales et al., 2024). This lack of efficient monitoring contributes to the GHG emissions associated with sugar production, estimated at 1.47 kg CO2/kg sugar (Li et al., 2024).

The primary objective of this study is to develop a machine learning (ML)-based model to predict sugar supersaturation using the sugar solubility dataset provided by Van der Sman (2017), aiming to establish correlations between temperature and sucrose solubility. To this end, different ML models were developed, and each underwent rigorous statistical evaluation to verify its ability to capture solubility trends effectively. The results were compared to the saturation curve predicted by the Flory-Huggins thermodynamic model. The ML model, validated using experimental datasets, simplifies predictions by accounting for impurities and temperature dependencies. The findings indicate that this predictive model allows for more precise dynamic control of the crystallization process. Finally, the effect of the developed model on sustainable sugar production was investigated. It was demonstrated that using this model may reduce the mean batch residence time during the crystallization stage, lowering energy consumption, reducing the CO2 footprint, increasing production capacity, and ultimately contributing to sustainable process development.
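As a schematic of the approach, one can fit any regressor to solubility-versus-temperature data and read supersaturation off as the ratio of actual to predicted saturation concentration; the toy values and the simple polynomial-ridge model below are placeholders for the models and the Van der Sman (2017) dataset used in the study.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Toy solubility data (g sucrose / g water vs. T in degC); placeholders.
    T = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])[:, None]
    c_sat = np.array([2.01, 2.16, 2.33, 2.59, 2.89, 3.25, 3.70])

    model = make_pipeline(PolynomialFeatures(2), Ridge(alpha=1e-3))
    model.fit(T, c_sat)

    def supersaturation(c, T_now):
        # SS = actual concentration / predicted saturation concentration at T.
        return c / model.predict(np.array([[T_now]]))[0]

    print(f"SS = {supersaturation(2.7, 45.0):.2f}")   # > 1 means supersaturated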

References:

Boskabadi, M. R., Sivaram, A., Sin, G., & Mansouri, S. S. (2024). Machine Learning-Based Soft Sensor for a Sugar Factory’s Batch Crystallizer. In Computer Aided Chemical Engineering (Vol. 53, pp. 1693–1698). Elsevier.

Li, K., Zhao, M., Li, Y., He, Y., Han, X., Ma, X., & Ma, F. (2024). Spatiotemporal Trends of the Carbon Footprint of Sugar Production in China. Sustainable Production and Consumption, 46, 502–511.

Morales, H., di Sciascio, F., Aguirre-Zapata, E., & Amicarelli, A. (2024). Crystallization Process in the Sugar Industry: A Discussion On Fundamentals, Industrial Practices, Modeling, Estimation and Control. Food Engineering Reviews, 1–29.

Van der Sman, R. G. M. (2017). Predicting the solubility of mixtures of sugars and their replacers using the Flory–Huggins theory. Food & Function, 8(1), 360–371.



Role of process integration and renewable energy utilization for the decarbonization of the watchmaking sector

Pullah Bhatnagar1, Daniel Alexander Florez Orrego1, Vibhu Baibhav1, François Maréchal1, Manuele Margni2

1EPFL, Switzerland; 2HES-SO Valai Wallis, Switzerland

Switzerland is the largest exporter of watches and clocks worldwide. The Swiss watch industry contributes 4% to Switzerland's GDP, amounting to CHF 25 billion annually. As governments and international organizations accelerate efforts to achieve net-zero emissions, industries are increasingly pressured to adopt more sustainable practices. Decarbonizing the watch industry is therefore essential. One way to improve sustainability is by enhancing energy efficiency, which can significantly reduce the consumption of various energy sources, leading to lower emissions. Additionally, recovering waste heat from different industrial processes can further enhance energy efficiency.

The watch industry operates across five distinct typical day types, each characterized by a different level of average power demand, plant activity, and duration. Among these, typical working days have the highest energy demand, while vacation periods have the lowest. Adjusting the timing of vacation periods, such as shifting the month in which the industry closes, can also improve energy efficiency. This becomes particularly relevant with the integration of decarbonization technologies like photovoltaic (PV) and solar thermal (ST) systems, which generate more energy during the summer months.

This work also explores the techno-economic feasibility of incorporating energy storage solutions (both for heat and electricity) and developing a tailored charging and dispatch strategy. The strategy would be designed to account for the variations in energy demand observed across the different characteristic time periods within a month.



An Integrated Machine Learning Framework for Predicting HPNA Formation in Hydrocracking Units Using Forecasted Operational Parameters

Pelin Dologlu1, Berkay Er1, Kemal Burçak Kaplan1, İbrahim Bayar2

1SOCAR Turkey, Digital Transformation Department, Istanbul 34485, Turkey; 2SOCAR STAR Oil Refinery, Process Department, Aliaga, Izmir 35800, Turkey

The accumulation of heavy polynuclear aromatics (HPNAs) in hydrocracking units (HCUs) poses significant challenges to catalyst performance and process efficiency. This study proposes an integrated machine learning framework that combines ridge regression, K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks to predict HPNA formation, enabling proactive process management. For the training phase, weighted average bed temperature (WABT), catalyst deactivation phase—classified using unsupervised KNN clustering—and hydrocracker feed (HCU feed) parameters obtained from laboratory analyses are utilized to capture the complex nonlinear relationships influencing HPNA formation. In the simulation phase, forecasted WABT values are generated using a ridge regression model, and future HCU feed changes are derived from planned crude oil blend data provided by the planning department. These forecasted WABT values, predicted catalyst deactivation phases, and anticipated HCU feed parameters serve as inputs to the LSTM model for predicting future HPNA levels. This approach allows us to simulate various operational scenarios and assess their impact on HPNA accumulation before they manifest in the actual process. By identifying critical process parameters and their influence on HPNA formation, the model enhances process engineers' understanding of the hydrocracking operation. The ability to predict HPNA levels in advance empowers engineers to implement corrective actions proactively, such as adjusting feed compositions or operating conditions, thereby mitigating HPNA formation and extending catalyst life. The integrated framework demonstrates high predictive accuracy and robustness, underscoring its potential as a valuable tool for optimizing HCU operations through advanced predictive analytics and informed decision-making.
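Schematically, the two-stage pipeline chains a ridge forecaster with a sequence model; the shapes, features, and data below are random placeholders rather than refinery data.

    import numpy as np
    import torch
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Stage 1: forecast WABT from planned operating variables (placeholders).
    X_op, wabt = rng.random((200, 5)), rng.random(200)
    wabt_model = Ridge().fit(X_op, wabt)
    wabt_future = wabt_model.predict(rng.random((30, 5)))   # 30-step forecast

    # Stage 2: an LSTM maps sequences of [WABT, feed property, phase] to HPNA;
    # the forecasted WABT would occupy the first input channel.
    lstm = torch.nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
    head = torch.nn.Linear(16, 1)
    seq = torch.randn(1, 30, 3)        # one batch of 30 forecasted time steps
    out, _ = lstm(seq)
    hpna_pred = head(out[:, -1])       # predicted HPNA level at horizon end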



Towards the Decarbonization of a Conventional Ammonia Plant by the Gradual Incorporation of Green Hydrogen

João Fortunato, Pedro Castro, Diogo A. C. Narciso, Henrique A. Matos

Centro de Recursos Naturais e Ambiente, Department of Chemical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal

Ammonia (NH3) is the second most produced chemical worldwide, and its production depends heavily on fossil fuel consumption. The ammonia production process is highly energy-intensive, accounting for 1-2 % of total carbon dioxide emissions and 2 % of the energy consumed worldwide [1]. Ammonia is industrially produced by the Haber-Bosch (HB) process, by reacting hydrogen with nitrogen. Hydrogen can be obtained from a variety of feedstocks, such as coal and naphtha, but is typically obtained from the processing of natural gas via Steam Methane Reforming (SMR) [1]. In the latter case, atmospheric air can be used directly as a nitrogen source without the need for prior separation, since the oxygen is completely consumed by the methane partial oxidation reaction [2].

The ammonia industry is striving for decarbonization, driven by increasingly stringent carbon neutrality policies and energy independence targets. In Europe, the Renewable Energy Directive III requires that 42 % of the hydrogen used in industrial processes come from renewable sources by 2030 [3], setting a critical shift towards more sustainable ammonia production methods.

The literature includes many studies focusing on the production of low-carbon ammonia entirely from green hydrogen, without considering its production via SMR. However, this approach could threaten the competitiveness of the current industry and forfeit the opportunity to keep valorizing existing investments.

This work addresses the challenges involved in incorporating green hydrogen into a conventional ammonia production plant (methane-fed HB process). An Aspen Plus V14 model was developed, and two green hydrogen incorporation strategies, S-I and S-II, were tested. These were inspired by existing operating procedures at a real-life plant; the main focus of the simulations is therefore to determine the feasible operating limits of an existing conventional NH3 plant and to observe the associated main KPIs when green H2 becomes available.

The S-I strategy cuts grey hydrogen production by reducing the natural gas and process steam fed to the SMR; the intake of green hydrogen allows total hydrogen and ammonia production to remain fixed.

In strategy S-II, grey hydrogen production remains unchanged, resulting in higher total hydrogen production. By taking in larger quantities of process air, higher NH3 production can be achieved.

These strategies introduce changes to the SMR process and NH3 synthesis, which imply modifications to the operating conditions of the plant. These changes lead to a technical limit for the incorporation of green hydrogen into the conventional HB process. Nevertheless, both strategies make it possible to reduce carbon emissions per quantity of NH3 produced and promote the gradual decarbonization of the current ammonia industry.
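For intuition, a back-of-the-envelope sketch of how the process CO2 intensity scales with the green hydrogen share is given below; it counts only the overall SMR stoichiometry (CH4 + 2 H2O -> CO2 + 4 H2) and deliberately ignores fuel-side and utility emissions, so it understates real plant figures.

M_CO2, M_NH3 = 44.0, 17.0

def co2_per_kg_nh3(green_fraction: float) -> float:
    """Stoichiometric process CO2 per kg NH3 for a given green-H2 share."""
    h2_per_nh3 = 1.5              # mol H2 per mol NH3 (N2 + 3 H2 -> 2 NH3)
    co2_per_h2 = 0.25             # mol CO2 per mol grey H2 from SMR + shift
    mol_co2 = h2_per_nh3 * (1.0 - green_fraction) * co2_per_h2
    return mol_co2 * M_CO2 / M_NH3   # kg CO2 per kg NH3

for f in (0.0, 0.2, 0.42, 1.0):      # 0.42 = RED III industrial target share
    print(f"green H2 share {f:.0%}: {co2_per_kg_nh3(f):.2f} kg CO2 / kg NH3")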

[1] IEA International Energy Agency. Ammonia Technology Roadmap - Towards More Sustainable Nitrogen Fertiliser Production. https://www.iea.org/reports/ammonia-technology-roadmap (2021).

[2] Appl, M. Ammonia, 2. Production Processes. In Ullmann’s Encyclopedia of Industrial Chemistry (Wiley, 2011). doi:10.1002/14356007.o02_o11.

[3] RED III: Directive (EU) 2023/2413 of 18 October 2023.



Gate-to-Gate Life Cycle Assessment of CO₂ Utilisation in Enhanced Oil Recovery: Sustainability and Environmental Impacts in Dukhan Field, Qatar

Razan Sawaly, Ahmad Abushaikha, Tareq Al-Ansari

Hamad Bin Khalifa University (HBKU), Qatar

This study examines the potential impact of implementing a cap and trade system to reduce CO₂ emissions in Qatar's industrial sector, which is a significant contributor to global emissions. Using data from seven key industries, the research sets emission caps, allocates allowances through a grandfathering method, and allows trading of these allowances to create economic incentives for emission reductions. The study utilizes a model with a carbon price of $12.50 per metric ton of CO₂ and compares baseline emissions with future reduction strategies. Results indicate that while some industrial plants, such as the LNG and methanol plants, achieved substantial emission reductions and financial surpluses through practices like carbon capture and switching to hydrogen, others continued to face deficits. The findings highlight the system's potential to promote sustainable practices, suggesting that tighter caps and auction-based allowance allocations could further enhance the effectiveness of the cap and trade system in Qatar's industrial sector.
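The allowance logic described above can be sketched as follows; the plant names, baselines, cap factor and achieved emissions are invented for illustration, and only the carbon price of $12.50 per metric ton comes from the abstract.

CARBON_PRICE = 12.50          # $/t CO2 (from the abstract)
CAP_FACTOR = 0.90             # caps set at 90% of baseline (assumed)

plants = {                    # name: (baseline t CO2, achieved emissions t CO2)
    "LNG":      (1_000_000, 820_000),
    "methanol": (  400_000, 330_000),
    "steel":    (  600_000, 590_000),
}

for name, (baseline, emitted) in plants.items():
    allowance = CAP_FACTOR * baseline          # grandfathered allocation
    surplus_t = allowance - emitted            # >0: sell allowances, <0: buy
    cashflow = surplus_t * CARBON_PRICE
    print(f"{name:8s} surplus {surplus_t:+10.0f} t -> cashflow {cashflow:+12,.0f} $")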



Robust Flowsheet Synthesis for Ethyl Acetate, Methanol and Water Separation

Aayush Gupta, Kartavya Maurya, Nitin Kaistha

Indian institute of technology Kanpur, India

Ethyl acetate and methanol are commonly used solvents in the pharmaceutical, textile, dye, fine organic, and paint industries [1], [2]. The waste solvent from these industries often contains EtAc and MeOH in water in widely varying proportions. Sustainability concerns, reflected in increasingly stringent waste discharge regulations, now dictate complete recovery, recycling and reuse of the organic species from the waste solvent. For the EtAc-MeOH-water waste solvent, simple distillation cannot be used due to the presence of a homogeneous EtAc-MeOH azeotrope and a heterogeneous EtAc-water azeotrope. Synthesizing a feasible flowsheet structure that separates a given waste solvent mixture into its nearly pure constituents (EtAc, MeOH and water) then becomes challenging. The flowsheet structure, among other things, depends on the waste solvent composition: a flowsheet that is feasible for a dilute waste solvent mixture may become infeasible for a more concentrated one. Given that the flowsheet structure, once chosen, remains fixed, and that wide variability in the waste solvent composition is expected, we propose in this work a “robust” flowsheet structure with guaranteed feasibility regardless of the waste solvent composition. Such a structure has the potential to significantly improve the economic viability of a waste solvent processing plant, as the same equipment can be used to separate the wide range of received waste solvents.

The key to the robust flowsheet design is the use of a liquid-liquid extractor (LLX) with recycled water as the solvent. For a sufficiently high water rate to the LLX, the raffinate composition is close to the EtAc-water edge (nearly MeOH-free), on the liquid-liquid envelope and in the EtAc-rich distillation region. The raffinate is distilled to obtain a pure EtAc bottoms product, and the overhead vapour is condensed and decanted, with the organic layer refluxed to the column. The aqueous distillate is mixed with the MeOH-rich extract and stripped to obtain an EtAc-free MeOH-water bottoms. The overhead vapour is condensed and recycled back to the LLX. The MeOH-water bottoms is further distilled to obtain a pure MeOH distillate and a pure water bottoms, a fraction of which is recirculated to the LLX as the solvent feed. Converged designs are obtained for an equimolar waste solvent composition as well as for EtAc-rich, MeOH-rich and water-rich compositions, demonstrating the robustness of the flowsheet structure to large changes in the waste solvent composition.
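As a simple consistency check on such a three-product split, an overall material balance can be solved for the product flow rates; the feed and the assumed near-pure product compositions below are illustrative placeholders.

import numpy as np

z = np.array([0.30, 0.30, 0.40])      # feed mole fractions: EtAc, MeOH, water
F = 100.0                             # feed rate, kmol/h

# columns = product streams, rows = components; assumed product compositions
X = np.array([
    [0.99, 0.005, 0.01],              # EtAc
    [0.005, 0.99, 0.01],              # MeOH
    [0.005, 0.005, 0.98],             # water
])

# solve X @ P = F * z for the three product flow rates P
P = np.linalg.solve(X, F * z)
print(dict(zip(["EtAc product", "MeOH product", "water product"], P.round(2))))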

References

[1] C. S. Slater, M. J. Savelski, W. A. Carole and D. J. C. Constable, "Solvent use and waste issues," Green Chemistry in the Pharmaceutical Industry, pp. 49-82, 2010.

[2] T. S., L. Z., C. H., Z. H., L. W., F. Y., H. X. and S. Longyan, "Method for separating and recovering ethyl acetate and methanol," China Patent CN102746147B, May 2014.



Integrating offshore wind energy into the optimal deployment of a hydrogen supply chain: a case study in Occitanie

Melissa Cherrouk1,2, Catherine Azzaro-Pantel1, Marie Robert2, Florian Dupriez Robin2

1France Energies Marines / Laboratoire de Génie Chimique, France; 2France Énergies Marines, Technopôle Brest-Iroise, 525 Avenue Alexis de Rochon, 29280, Plouzané, France

The urgent need to mitigate climate change and reduce dependence on fossil fuels has led to the exploration of alternative energy solutions, with green hydrogen emerging as a key player in the global energy transition. Thus, the aim of this study is to assess the feasibility and competitiveness of producing hydrogen at sea using offshore wind energy, evaluating both economic and environmental perspectives.

Offshore wind energy offers several advantages for hydrogen production. These include access to water for electrolysis, potentially lower export costs for hydrogen compared to electricity, and the ability to smooth the variability of wind energy through hydrogen storage systems. Proper storage plays a crucial role in addressing the intermittency of wind power, making the hydrogen output more stable. This positions storage not only as an advantage but also as a key step for the successful coupling of offshore wind with hydrogen production. However, challenges remain, particularly regarding the capacity and cost of such storage solutions, alongside the high capital expenditures (CAPEX) and operational costs (OPEX) required for offshore systems.

This research explores the potential of offshore wind farms (OWFs) to contribute to hydrogen production by extending a techno-economic model based on Mixed-Integer Linear Programming (MILP). The model optimizes the number and type of production units, storage locations, and distribution methods, determining the optimal hydrogen flows between regional hubs. The case study focuses on the Occitanie region in southern France, where hydrogen could be produced offshore from a 30 MW floating wind farm with three turbines located 30 km from the coast and transported via pipelines. Other energy sources may complement offshore wind energy to meet hydrogen supply demands. The study evaluates two scenarios: minimizing hydrogen production costs and minimizing greenhouse gas emissions over a 30-year period, divided into six five-year phases.
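A toy MILP in the spirit of the model described above is sketched below using PuLP; the unit capacity, costs and phase demands are invented placeholders rather than the study's data.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

periods = range(6)                                 # six five-year phases
demand = [200, 400, 600, 800, 900, 1000]           # t H2 per phase (hypothetical)
unit_cap = 250.0                                   # t H2 per electrolyser per phase
capex_per_unit, opex_per_t = 5_000.0, 4.0          # invented cost figures

prob = LpProblem("h2_chain", LpMinimize)
build = [LpVariable(f"build_{t}", lowBound=0, cat="Integer") for t in periods]
prod = [LpVariable(f"prod_{t}", lowBound=0) for t in periods]
store = [LpVariable(f"store_{t}", lowBound=0) for t in periods]   # H2 carried over

prob += lpSum(capex_per_unit * build[t] + opex_per_t * prod[t] for t in periods)

for t in periods:
    installed = lpSum(build[k] for k in range(t + 1))   # units accumulate over phases
    prob += prod[t] <= unit_cap * installed
    prev = store[t - 1] if t > 0 else 0
    prob += prev + prod[t] == demand[t] + store[t]      # hydrogen balance per phase

prob.solve()
print("units built per phase:", [int(value(b)) for b in build])
print("production per phase: ", [round(value(p)) for p in prod])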

Initial findings show that, from an economic standpoint, the Levelized Cost of Hydrogen (LCOH) from offshore wind remains higher than that of conventional hydrogen production methods. However, the Global Warming Potential (GWP) of hydrogen produced from offshore wind ranks it among the most environmentally friendly options. Despite this, the volume of hydrogen produced in the current configuration does not meet the demand required for a significant impact on Occitanie's hydrogen market, which highlights the need to test higher OWF power levels and potential hybridization with other renewable energy sources.

The results underline the importance of future multi-objective optimization methods to better balance the economic and environmental trade-offs and make offshore wind a more competitive option for hydrogen production.

Reference:
Sofía De-León Almaraz, Catherine Azzaro-Pantel, Ludovic Montastruc, Marianne Boix, Deployment of a hydrogen supply chain by multi-objective/multi-period optimisation at regional and national scales, Chemical Engineering Research and Design, Volume 104, 2015, Pages 11-31, https://doi.org/10.1016/j.cherd.2015.07.005.



Robust Techno-economic Analysis, Life Cycle Assessment, and Quality-by-Design of Three Alternative Continuous Pharmaceutical Tablet Manufacturing Processes

Shang Gao, Brahim Benyahia

Loughborough University, United Kingdom

This study presents a comprehensive comparison of three key downstream tableting manufacturing methods for pharmaceuticals: i) Dry Granulation (DG) through roller compaction, ii) Direct Compaction (DC), and iii) Wet Granulation (WG). First, integrated mathematical models of these downstream (drug product) processes were developed using gPROMS Formulated Products, along with data from the literature and our recent experimental work. The process models were designed and simulated to reliably capture the impact of different design options, process parameters, and material attributes. Uncertainty analysis was conducted using global sensitivity analysis to identify the critical process parameters (CPPs) and critical material attributes (CMAs) that most significantly influence the quality and performance of the final pharmaceutical tablets. These are captured by the critical quality attributes (CQAs), which include tablet hardness, dissolution rate, impurities/residual solvents, and content uniformity—factors crucial for ensuring product safety and efficacy. Based on the identified CPPs and CMAs, combined design spaces that guarantee the attainment of the targeted CQAs were identified and compared. Additionally, techno-economic analyses were conducted alongside life cycle assessments (LCA) based on the process simulation results and inventory data. The LCA provided an in-depth evaluation of the environmental impacts associated with each manufacturing method, considering aspects such as energy consumption, raw material usage, emissions, and waste generation across a cradle-to-gate approach. By integrating CQAs within the LCA framework, this study offers a holistic analysis that captures both the environmental sustainability and product quality implications of the three tableting processes. The findings aim to guide the selection of more sustainable and efficient manufacturing practices in the pharmaceutical industry, balancing trade-offs between environmental impact and product quality.
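The global sensitivity step can be illustrated with a Sobol analysis via SALib on a toy tablet-hardness surrogate; the response function, parameter names and ranges are invented for illustration and do not reproduce the gPROMS models.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["compaction_pressure", "binder_fraction", "particle_size"],
    "bounds": [[50, 300], [0.01, 0.05], [50, 200]],
}

X = saltelli.sample(problem, 1024)
# toy CQA model: hardness rises with pressure and binder, falls with size
Y = 0.02 * X[:, 0] + 80.0 * X[:, 1] - 0.01 * X[:, 2] + np.random.normal(0, 0.1, len(X))

Si = sobol.analyze(problem, Y)
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name:20s} first-order Sobol index: {s1:.2f}")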

Keywords: Dry Granulation, Direct Compaction, Wet Granulation, Life Cycle Assessment (LCA), Techno-economic Analysis (TEA), Quality-by-Design (QbD)

Acknowledgements

The authors acknowledge funding from the UK Engineering and Physical Sciences Research Council (EPSRC), for Made Smarter Innovation – Digital Medicines Manufacturing Research Centre (DM2), EP/V062077/1.



Systematic Model Builder, Model-Based Design of Experiments, and Design Space Identification for a Multistep Pharmaceutical Process

Xuming Yuan, Ashish Yewale, Brahim Benyahia

Loughborough University, United Kingdom

Mathematical models of different processing units are usually established and optimized individually, even when these processes are meant to be combined sequentially in the real world, particularly in continuously operated plants. Although this traditional approach may help reduce complexity, it may deliver suboptimal solutions and/or overlook the interactions between the unit operations. Most importantly, it can dramatically increase the development time, waste, and experimental costs inherent to the raw materials, solvents, cleaning, etc. This study aims at developing a systematic approach to establish and optimize integrated mathematical models of interactive multistep processes. The methodology starts by suggesting various model candidates for the different unit operations based on prior knowledge. These candidates are then combined, yielding several candidate integrated models for the multistep process. Model discrimination based on structural identifiability analysis and model prediction performance (Yuan and Benyahia, 2024) reveals the best integrated model. Afterwards, the model is refined through estimability analysis (Bouchkira and Benyahia, 2023) and model-based design of experiments (MBDoE), which delivers the optimal experimental design guaranteeing the most information-rich data. With the acquisition of the new experimental data, the reliability and robustness of the multistep mathematical model are dramatically enhanced. The optimized model is subsequently used to identify the design space of the multistep process, which delivers the optimal operating ranges of the critical process parameters (CPPs) that satisfy the targeted critical quality attributes (CQAs). A blending-tableting process of paracetamol is selected as a case study. The methodology applies prior knowledge from Kushner and Moore (2010), Nassar et al. (2021) and Puckhaber et al. (2022) to establish model candidates for this two-unit-operation process, where the effects of lubrication in the blender, as well as of the composition and porosity of the tablet, on the tablet tensile strength are taken into consideration. Model discrimination and refinement are then performed to identify and improve the optimal integrated model for this two-step process, and the enhanced model is applied to design space identification under specified CQA targets. The results confirm the effectiveness of the proposed methodology and demonstrate its potential for achieving higher optimality in processes involving multiple unit operations.
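The MBDoE step can be illustrated with a minimal D-optimal design for a toy two-parameter model y = a(1 - exp(-b t)); the model, parameter values and candidate sampling times are illustrative assumptions.

import numpy as np
from itertools import combinations

a, b = 2.0, 0.5                     # current parameter estimates
t_candidates = np.linspace(0.5, 10, 20)

def sensitivities(t):
    """dy/dtheta at time t for theta = (a, b)."""
    return np.array([1 - np.exp(-b * t), a * t * np.exp(-b * t)])

best = None
for pair in combinations(t_candidates, 2):      # choose 2 sampling times
    J = np.array([sensitivities(t) for t in pair])
    fim = J.T @ J                               # Fisher information matrix
    d = np.linalg.det(fim)
    if best is None or d > best[0]:
        best = (d, pair)

print(f"D-optimal sampling times: {best[1]}, det(FIM) = {best[0]:.3f}")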



The role of long-term storage in municipal solid waste treatment systems: Multi-objective resources integration

Julie Dutoit1,2, Jaroslav Hemrle2, François Maréchal1

1École Polytechnique Fédérale de Lausanne (EPFL), Industrial Process Energy Systems Engineering (IPESE), Sion, 1951, Switzerland; 2Kanadevia Inova AG, Zürich, 8005, Switzerland

Estimates for the 2050 horizon predict a significant increase in municipal solid waste (MSW) generation in every world region, whereas important discrepancies remain between the net-zero decarbonization targets of the Paris Agreement and the environmental performance of current waste treatment technologies. This creates an important area of research and development to improve the solutions, especially with regard to circular economy goals for material recovery and transitioning energy supply systems. As shown for plastic chemical recycling by Martínez-Narro et al., 2024, promising technologies may include energy-intensive steps which need to be integrated with renewable energy to be environmentally viable. With growing intra-daily and seasonal variations of power availability due to the increasing share of renewable production, Demand Side Response (DSR) measures play a crucial role besides energy storage systems in supporting power grid stability. In current research, the applicability of DSR to industrial process models is under-represented relative to the residential sector, with little attention given to control strategies or input predictions in system analysis (Bosmann and Eser, 2016; Kirchem et al., 2020).

This contribution presents a framework to evaluate the potential of waste treatment systems to shift energy loads for better integration into the energy systems of industrial clusters or residential areas. The waste treatment system scenarios are modeled, simulated and optimized in a hybrid OpenModelica/Python framework, described by Dutoit et al., 2024. In particular, pinch analysis (Linnhoff and Hindmarsh, 1983) is used for the heat integration assessment. The multi-objective approach relies on key performance indicators covering process, economic and environmental impact aspects.
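The heat integration step can be illustrated with a compact problem-table cascade in the spirit of Linnhoff and Hindmarsh (1983); the four streams and the minimum approach temperature below are illustrative placeholders.

dTmin = 10.0
# (supply T, target T, heat capacity flow rate CP in kW/K)
streams = [(180, 60, 3.0), (150, 30, 1.5),    # hot streams
           (30, 135, 2.0), (80, 140, 4.0)]    # cold streams

# shift hot streams down and cold streams up by dTmin/2
shifted, temps = [], set()
for ts, tt, cp in streams:
    hot = ts > tt
    s = -dTmin / 2 if hot else dTmin / 2
    shifted.append((ts + s, tt + s, cp, hot))
    temps.update((ts + s, tt + s))

bounds = sorted(temps, reverse=True)
cascade, q = [0.0], 0.0
for hi, lo in zip(bounds, bounds[1:]):
    # net heat surplus of the interval: hot streams release, cold streams consume
    net = sum(cp * (1 if hot else -1)
              for ts, tt, cp, hot in shifted
              if min(ts, tt) <= lo and max(ts, tt) >= hi)
    q += net * (hi - lo)
    cascade.append(q)

qh_min = -min(min(cascade), 0.0)               # minimum hot utility
qc_min = cascade[-1] + qh_min                  # minimum cold utility
print(f"Qh,min = {qh_min:.1f} kW, Qc,min = {qc_min:.1f} kW")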

For the case study application, the core technologies included are waste sorting, waste incineration and post-combustion amine-based carbon capture, which are integrated with heat and power utilities to satisfy varying external demand from the power grid and the district heating network. The heterogeneous modeling of the waste flows makes it possible to define several design options for the material recovery facility for waste plastic fraction sorting, and scenarios are simulated to evaluate the system performance under the described control strategies. Results provide insights for optimal system operations and integration from an industrial perspective.

References

Bosmann, T., & Eser, E. J. (2016). Model-based assessment of demand-response measures – A comprehensive literature review. Renewable and Sustainable Energy Reviews, 57, 1637–1656. https://doi.org/10.1016/j.rser.2015.12.031.

Dutoit, J., Hemrle, J., Maréchal, F. (2024). Supporting Life-Cycle Impact Assessment Transparency in Waste Treatment Systems Simulation: A Decision-Support Methodology. In preparation.

Kirchem, D., Lynch, M. A., Bertsch, V., & Casey, E. (2020). Modelling demand response with process models and energy systems models: Potential applications for wastewater treatment within the energy-water nexus. Applied Energy, 260, 114321. https://doi.org/10.1016/j.apenergy.2019.114321

Linnhoff, B., & Hindmarsh, E. (1983). The pinch design method for heat exchanger networks. Chemical Engineering Science, 38(5), 745–763. https://doi.org/10.1016/0009-2509(83)80185-7

Martínez-Narro, G., Hassan, S., N. Phan, A. (2024). Chemical recycling of plastic waste for sustainable polymer manufacturing – A critical review. Journal of Environmental Chemical Engineering, 12, 112323. https://doi.org/10.1016/j.jece.2024.112323.



A Comparative Study of Feature Importance in Process Data: Neural Networks vs. Human Visual Attention

Rohit Suresh1, Babji Srinivasan1,3, Rajagopalan Srinivasan2,3

1Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 2Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India; 3American Express Lab for Data Analytics, Risk and Technology Indian Institute of Technology Madras, Chennai 600036, India

Artificial Intelligence (AI) and automation technologies have revolutionized the way many sectors operate. In process industries and power plants specifically, there is considerable scope for enhancing production and efficiency with AI through predictive maintenance, condition monitoring, inspection, and quality control. However, despite these advancements, human operators remain the final decision-makers in such major safety-critical systems. Fostering collaboration between human operators and AI systems is the inevitable next step forward. A primary step towards achieving this goal is to capture the representation of information acquired by both human operators and AI-based systems in a mutually comprehensible way. This would aid in understanding the rationale behind their decisions. AI-based systems with deep networks and complex architectures often achieve the best results; however, they are often disregarded by human operators due to a lack of transparency. While eXplainable AI (XAI) is an active research area that attempts to make deep networks comprehensible, understanding the human rationale behind decision-making is largely overlooked.

Several popular XAI techniques, such as local interpretable model-agnostic explanations (LIME) and Gradient-Weighted Class Activation Mapping (Grad-CAM), provide explanations via feature attribution. In the context of process monitoring, Bhakte et al. (2022) used a Shapley value framework with integrated gradients to estimate the marginal contribution of process variables in fault classification. One popular way to evaluate the explanations provided by various XAI algorithms is through human eye gaze tracking: participants' visual attention over the stimulus is estimated using eye tracking and compared with the XAI results.

Eye tracking also has the potential to characterise the mental models of control room operators during different experimental scenarios (Shahab et al., 2022). In that work, participants acting as control room operators were given disturbance rejection tasks based on alarm signals and process variable trends in the HMI. Extending that work, we attempt to explain the human operator's rationale behind decision-making through eye tracking. Participants' dynamic attention allocation over the stimulus is objectively captured using various eye gaze metrics, which are further used to extract the major causal factors that influenced participants' decisions. The effectiveness of the method is demonstrated with a case study: we conduct eye tracking experiments in which participants are required to identify the fault in the process. During the experiment, images of the trend panel, with the trajectories of all major process variables captured at a specific instant, are shown to the participants. The process variable responsible for the fault is objectively identified using operator knowledge. Our future work will focus on integrating this human rationale with XAI, paving the way for human-machine teaming.
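One simple way to frame the comparison is to correlate a model's feature importance with gaze dwell-time shares over the same variables, as in the hedged sketch below; the data, classifier and dwell-time values are synthetic stand-ins, and permutation importance is used here merely as a generic attribution method.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))                  # 5 process variable trends
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)  # fault driven by variables 2 and 0

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

dwell = np.array([0.25, 0.05, 0.45, 0.15, 0.10])   # hypothetical gaze shares
rho, p = spearmanr(imp.importances_mean, dwell)
print(f"rank correlation between model and gaze importance: {rho:.2f} (p={p:.2f})")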

References:
Bhakte, A., Pakkiriswamy, V. and Srinivasan, R., 2022. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chemical Engineering Science, 250, p.117373.
Shahab, M.A., Iqbal, M.U., Srinivasan, B. and Srinivasan, R., 2022. HMM-based models of control room operator's cognition during process abnormalities. 1. Formalism and model identification. Journal of Loss Prevention in the Process Industries, 76, p.104748.



Parameter Estimation and Model Comparison for Mixed Substrate Biomass Fermentation

Tom Vinestock, Miao Guo

King's College London, United Kingdom

Single cell protein (SCP) fermentation is an effective way of transforming carbohydrate-rich substrates into high-protein foodstuffs and is more sustainable than conventional animal-based protein production [1]. However, whereas cows and other ruminants can be fed agricultural residues, such as rice straw, SCP fermentations generally depend on high-purity single-substrate feedstocks as a carbon source, such as starch-derived glucose, which are expensive and compete directly with food crops.

Consequently, there is interest in switching to feedstocks derived from cheap agricultural lignocellulosic residues. However, treatment of such lignocellulosic residues produces a mixed feedstock, typically containing both glucose and xylose [2]. Accurate models of mixed-substrate growth are needed for fermentation decision-makers to understand the trade-offs associated with transitioning to the more sustainable lignocellulosic feedstocks. Such models are also a prerequisite for optimizing the operation and control of mixed-substrate fermentations.

In this work, recently published biomass and substrate concentration time-series data for glucose-xylose batch fermentation of F. venenatum [3] are used to estimate parameters for different unstructured models of diauxic growth. A Bayesian optimisation methodology is employed to identify the best parameters in each case. A novel model for diauxic growth with substrate cross-inhibition, mediated by variable enzyme production, is proposed, based on Nakamura et al. [4] but simplified to reduce the number of states and parameters, and hence improve identifiability and reduce overfitting. This model is shown to have a lower error on both the calibration and validation datasets than the model in Banks et al. [3], itself based on work by Vega-Ramon et al. [5], which models substrate cross-inhibition effects directly. The performance of the model proposed by Kompala and Ramkrishna [6], based on growth-optimised cellular resource allocation, is also evaluated.
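The estimation loop can be sketched as follows: a simplified diauxic model, in which glucose represses xylose uptake through an inhibition term, is fitted by Bayesian optimisation (scikit-optimize's gp_minimize); the kinetics, parameter bounds and synthetic data are illustrative and do not reproduce the models or data of refs. [3]-[6].

import numpy as np
from scipy.integrate import solve_ivp
from skopt import gp_minimize

def rhs(t, y, mu1, mu2, Ks1, Ks2, Ki, Yx1, Yx2):
    X, S1, S2 = y                                  # biomass, glucose, xylose
    r1 = mu1 * S1 / (Ks1 + S1)
    r2 = mu2 * S2 / (Ks2 + S2) * Ki / (Ki + S1)    # glucose represses xylose use
    return [(r1 + r2) * X, -r1 * X / Yx1, -r2 * X / Yx2]

t_obs = np.linspace(0, 40, 15)
# synthetic "measurements" generated from a reference parameter set
ref = (0.30, 0.15, 0.5, 1.0, 0.2, 0.5, 0.4)
y_obs = solve_ivp(rhs, (0, 40), [0.1, 10, 5], t_eval=t_obs, args=ref).y

def loss(theta):
    sol = solve_ivp(rhs, (0, 40), [0.1, 10, 5], t_eval=t_obs, args=tuple(theta))
    return float(np.mean((sol.y - y_obs) ** 2))

bounds = [(0.05, 0.6), (0.05, 0.6), (0.1, 2.0), (0.1, 2.0),
          (0.05, 1.0), (0.2, 0.8), (0.2, 0.8)]
res = gp_minimize(loss, bounds, n_calls=40, random_state=0)
print("estimated parameters:", np.round(res.x, 3))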

This work could lead to improved modelling of mixed substrate fermentation, and therefore help address the technical barriers to wider-scale use of lignocellulose-derived feedstocks in fermentation. Future research could test the generalisability of the diauxic growth models considered using data from a continuous or fed-batch mixed substrate fermentation.

References

[1] Good Food Institute, “Fermentation: State of the industry report,” 2021.

[2] L. Qin, L. Liu, A.P. Zeng, and D. Wei, “From low-cost substrates to single cell oils synthesized by oleaginous yeasts,” Bioresource Technology, Dec. 2017.

[3] M. Banks, M. Taylor, and M. Guo, “High throughput parameter estimation and uncertainty analysis applied to the production of mycoprotein from synthetic lignocellulosic hydrolysates,” 2024.

[4] Y. Nakamura, T. Sawada, F. Kobayashi, M. Ohnaga, and M. Suzuki, “Stability analysis of continuous culture in diauxic growth,” Journal of Fermentation and Bioengineering, 1996.

[5] F. Vega-Ramon, X. Zhu, T. R. Savage, P. Petsagkourakis, K. Jing, and D. Zhang, “Kinetic and hybrid modeling for yeast astaxanthin production under uncertainty,” Biotechnology and Bioengineering, Dec. 2021.

[6] D. S. Kompala, D. Ramkrishna, N. B. Jansen, and G. T. Tsao, “Investigation of bacterial growth on mixed substrates: Experimental evaluation of cybernetic models,” Biotechnology and Bioengineering, July 1986.



Identification of Suitable Operational Conditions and Dimensions for Supersonic Water Separation in Exhaust Gases from Offshore Turbines: A Case Study

Jonatas de Oliveira Souza Cavalcante1, Marcelo da Costa Amaral2, Fernando Luiz Pellegrini Pessoa1,3

1SENAI CIMATEC University Center, Brazil; 2Leopoldo Américo Miguez de Mello Research Center (CENPES); 3Federal University of Rio de Janeiro (UFRJ)

Due to space, weight, and energy efficiency constraints in offshore environments, the efficient removal of water from turbine exhaust gases is a crucial step to optimize operational performance in gas treatment processes. In this context, replacing conventional methods, such as molecular sieves, with supersonic separators emerges as a promising alternative. This work aims to determine the optimal operational conditions and dimensions for supersonic water separation from turbine exhaust gases on offshore platforms. The simulation was conducted using a unit operation extension in Aspen HYSYS, based on the compositions of exhaust gases from Brazilian pre-salt wells. Parameters such as operating conditions, separator dimensions, and the shock Mach number were optimized to maximize process efficiency and minimize separator size. The results indicated near-complete removal of water, demonstrating that supersonic separation technology, in addition to being compact, offers a viable and efficient alternative for water removal from exhaust gases, particularly in space-constrained environments.
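Sizing such a converging-diverging (Laval) nozzle rests on the isentropic area-Mach relation, sketched below; the specific heat ratio assumed for the hot exhaust gas is a placeholder.

import numpy as np

gamma = 1.33   # assumed ratio of specific heats for hot exhaust gas

def area_ratio(M: float) -> float:
    """A/A* for isentropic flow at Mach number M."""
    term = (2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M**2)
    return term ** ((gamma + 1) / (2 * (gamma - 1))) / M

for M in (1.5, 2.0, 2.5):
    print(f"Mach {M}: A/A* = {area_ratio(M):.2f}")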



On optimal hydrogen production pathway selection using the SECA multi-criteria decision-making method

Caroline Kaitano, Thokozani Majozi

University of the Witwatersrand, South Africa

Global population growth has resulted in an ever-greater demand for energy, and hydrogen offers a potential revolution for energy systems worldwide. Considering its numerous uses, research interest in sustainable production methods has grown. However, hydrogen production must satisfy three factors, i.e. energy security, energy equity, and environmental sustainability, referred to as the energy trilemma. This study therefore investigates the sustainability of hydrogen production pathways using a Multi-Criteria Decision-Making model. In particular, a modified Simultaneous Evaluation of Criteria and Alternatives (SECA) model was employed to prioritize 19 options for hydrogen production. This model simultaneously determines the overall performance scores of the 19 options and the objective weights of the energy trilemma criteria in a South African context. The results showed that environmental sustainability has the highest objective weight (0.37), followed by energy security (0.32), with energy equity the lowest (0.31). Of the 19 options selected, steam reforming of methane with carbon capture and storage was found to have the highest overall performance score, considering the trade-offs in the energy trilemma, followed by steam reforming of methane without carbon capture and storage and autothermal reforming of methane with carbon capture and storage. The results obtained in this study will potentially pave the way for optimally producing hydrogen from different feedstocks while considering the energy trilemma, and serve as a basis for further research in sustainable process engineering.
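For illustration, the reported weights can be applied in a simple additive scoring of a few example pathways; note that SECA itself determines weights and scores simultaneously through an optimisation model, so the sketch below, with invented normalised scores, only shows the final weighting step.

weights = {"environment": 0.37, "security": 0.32, "equity": 0.31}

pathways = {   # normalised 0-1 performance per criterion (invented)
    "SMR + CCS":       {"environment": 0.7, "security": 0.9, "equity": 0.8},
    "SMR":             {"environment": 0.4, "security": 0.9, "equity": 0.9},
    "ATR + CCS":       {"environment": 0.7, "security": 0.8, "equity": 0.7},
    "PV electrolysis": {"environment": 0.9, "security": 0.5, "equity": 0.4},
}

scores = {p: sum(weights[c] * v for c, v in crit.items())
          for p, crit in pathways.items()}
for p, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{p:16s} {s:.3f}")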



On the role of Artificial Intelligence in Feature-oriented Multi-Criteria Decision Analysis

Heyuan Liu1,2, Yi Zhao1, Francois Marechal1

1Industrial Process and Energy Systems Engineering (IPESE), Ecole Polytechnique Fédérale de Lausanne, Sion, Switzerland; 2École Polytechnique, France

In industrial applications, balancing economic and environmental goals is crucial amidst challenges like climate change. To address conflicting objectives, tools like multi-objective optimization (MOO) and multi-criteria decision analysis (MCDA) are utilized. MOO generates a range of viable solutions, while MCDA helps select the most suitable option considering factors like profitability, environmental impact, safety, and efficiency. These tools aid in making informed decisions amidst complex trade-offs and uncertainties.

In this study, we propose a novel approach to MCDA using advanced machine learning techniques and apply it to analyze decarbonization solutions for a typical European refinery. First, a hybrid dimensionality reduction method combining autoencoders and Principal Component Analysis (PCA) is developed to reduce high-dimensional data while retaining key features. The effectiveness of the dimensionality reduction is demonstrated by clustering the reduced data and mapping the clusters back to the original high-dimensional space; the high clustering quality scores indicate that the spatial distribution characteristics are well preserved. Geometric analysis techniques, such as Intrinsic Shape Signatures (ISS), Harris Corner Detection, and Mesh Saliency, further refine the identification of typical configurations. Specifically, 15 typical solutions identified by the ISS method are used as baselines to represent distinct regions of the solution space. These solutions serve as a reference set for further comparison.
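The dimensionality-reduction and clustering-quality check can be sketched as follows; for brevity the autoencoder stage is omitted (PCA only) and the solution data are random placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
solutions = rng.normal(size=(500, 40))        # Pareto solutions x indicators

reduced = PCA(n_components=3).fit_transform(solutions)
labels = KMeans(n_clusters=15, n_init=10).fit_predict(reduced)

# clustering quality in the reduced space and back in the original space
print("silhouette (reduced): ", silhouette_score(reduced, labels).round(3))
print("silhouette (original):", silhouette_score(solutions, labels).round(3))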

Building upon this reference set, we utilize Large Language Models (LLMs) to further enhance the decision-making process. First, LLMs are employed to generate and refine ranking criteria for evaluating the identified solutions. We employ an LLM with an iterative self-update mechanism to dynamically adjust weighting strategies, enhancing decision-making capabilities in complex environments. To address the input size limitations encountered in the problem, we apply heuristic design approaches that effectively manage and structure the information. Additionally, effective prompt engineering techniques are integrated to improve the model's reasoning and adaptability.

In addition to ranking, LLM technology provides comprehensive and interpretable explanations for the selected solutions. This includes breaking down the criteria used for each decision, clarifying trade-offs between competing objectives, and offering insights into how different configurations perform across various key performance indicators. These explanations help stakeholders better understand the rationale behind the chosen solutions, enabling more informed decision-making in practical applications.



Optimizing CO2 Utilization in Reverse Water-Gas Shift Membrane Reactors with Parametric PINNs

Zahir Aghayev1,2, Zhaofeng Li3, Michael Patrascu3, Burcu Beykal1,2

1Department of Chemical & Biomolecular Engineering, University of Connecticut, Storrs, CT 06269, USA; 2Center for Clean Energy Engineering, University of Connecticut, Storrs, CT 06269, USA; 3The Wolfson Department of Chemical Engineering, Technion – Israel Institute of Technology, Haifa 3200003, Israel

With atmospheric CO₂ levels reaching a record 426.91 ppm in June 2024, the urgency for innovative carbon capture and utilization (CCU) strategies to reduce emissions and repurpose CO₂ into valuable products has become even more critical [1]. One promising solution is the reverse water-gas shift (RWGS) reaction, which transforms CO₂ and hydrogen (produced through renewable-energy-powered electrolysis) into carbon monoxide, a key precursor for synthesizing fuels and chemicals. By integrating membrane reactors that selectively remove water vapor, the process shifts the equilibrium forward, in accordance with Le Chatelier's principle, resulting in higher CO₂ conversion and CO yield at lower temperatures. However, modeling this intensified system remains challenging due to the complex, nonlinear interaction between reaction kinetics and membrane transport.

In this study, we developed a physics-informed neural network (PINN) model that integrates first-principles physics with machine learning to model the RWGS process within a membrane reactor. This approach embeds governing physical laws into the network's architecture, reducing the computational burden typically associated with solving highly nonlinear ordinary differential equations (ODEs), while maintaining both accuracy and interpretability [2]. Our model demonstrated robust predictive performance, achieving an R² value exceeding 0.95, successfully capturing flow rate changes and reaction dynamics along the reactor length. Using this validated PINN model, we performed data-driven optimization, identifying operational conditions that maximized CO₂ conversion efficiency and reaction yield [3-6]. This hybrid modeling approach enhances prediction accuracy and optimizes the reactor conditions, offering a scalable solution for industries integrating renewable energy into chemical production and reducing carbon emissions. Our findings demonstrate the potential of advanced modeling to intensify CO₂ utilization processes, with significant implications for sustainable chemical production and energy systems.
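A minimal PINN in the sense used here can be sketched for a toy one-dimensional reactor profile, training a network C(z) to satisfy dC/dz = -kC with C(0) = 1; this stands in for, and is far simpler than, the coupled RWGS/membrane balances of the study.

import torch

k = 2.0                                              # assumed rate constant
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    z = torch.rand(64, 1, requires_grad=True)        # collocation points in [0, 1]
    c = net(z)
    dcdz, = torch.autograd.grad(c.sum(), z, create_graph=True)
    residual = dcdz + k * c                          # ODE residual dC/dz + kC
    bc = net(torch.zeros(1, 1)) - 1.0                # boundary condition C(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                                # compare with exact exp(-k z)
    z_test = torch.linspace(0, 1, 5).reshape(-1, 1)
    print(torch.hstack([z_test, net(z_test), torch.exp(-k * z_test)]))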

References

  1. NOAA Global Monitoring Laboratory. (2024). Trends in atmospheric carbon dioxide [online]. Available at: https://gml.noaa.gov/ccgg/trends/ [Accessed 10/13/2024].
  2. Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, pp.686-707.
  3. Beykal, B. and Pistikopoulos, E.N., 2024. Data-driven optimization algorithms. In Artificial Intelligence in Manufacturing (pp. 135-180). Academic Press.
  4. Boukouvala, F. and Floudas, C.A., 2017. ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems. Optimization Letters, 11, pp.895-913.
  5. Beykal, B., Aghayev, Z., Onel, O., Onel, M. and Pistikopoulos, E.N., 2022. Data-driven Stochastic Optimization of Numerically Infeasible Differential Algebraic Equations: An Application to the Steam Cracking Process. In Computer Aided Chemical Engineering (Vol. 49, pp. 1579-1584). Elsevier.
  6. Aghayev, Z., Voulanas, D., Gildin, E., Beykal, B., 2024. Surrogate-Assisted Optimization of Highly Constrained Oil Recovery Processes Using Classification-Based Constraint Modeling. Industrial & Engineering Chemistry Research (submitted).


Modeling, simulation and optimization of a carbon capture process through a fluidized TSA column

Eduardo dos Santos Funcia1, Yuri Souza Beleli1, Enrique Vilarrasa Garcia2, Marcelo Martins Seckler1, José Luís de Paiva1, Galo AC Le Roux1

1Polytechnic School of the University of Sao Paulo, Brazil; 2Federal University of Ceará, Brazil

Carbon capture technologies have recently emerged as a way to mitigate climate change and global warming by removing carbon dioxide from the atmosphere. Furthermore, by removing carbon dioxide from biomass-derived flue gases, an energy process with a negative carbon footprint can be achieved. Among carbon capture processes, fluidized temperature swing adsorption (TSA) columns are a promising low-pressure alternative, in which carbon dioxide flowing upwards is exothermically adsorbed onto a fluidized solid sorbent flowing downwards, and later endothermically desorbed at higher temperatures while the sorbent is regenerated for recirculation. Although an interesting venture, the TSA process has so far been developed only at small scale and remains to be scaled up to become an industrial reality. This work aims to model, simulate and optimize a TSA multi-stage equilibrium system in order to obtain a conceptual design for future process scale-up. A mathematical model was developed for adsorption using an approach that makes it easy to extend the model to various configurations. The model was extended to include multiple stages, each with a heat exchanger, and was also coupled to the desorption operation. Each column, adsorption and desorption, includes one external heat exchanger at the bottom to preheat the incoming gas flow, and the system also includes a heat exchanger in the recirculating solid sorbent flow, before the regenerated solid enters the top of the adsorption column. The model is based on molar and energy balances, coupled to pressure drops in a fluidized bed designed to operate close to the minimum fluidization velocity (calculated through semi-empirical correlations), and to the thermodynamics of adsorption equilibrium of a carbon dioxide-nitrogen mixture on solid sorbents. The Toth equilibrium isotherm was used with parameters obtained experimentally in a previous work (which suggested that the heterogeneity parameter for nitrogen should be fixed at unity). The complete fluidized TSA adsorption/desorption process was optimized to minimize energy, adsorbent and operating costs, as well as equipment investment and installation, considering equilibrium in each fluidized bed stage. The optimal configuration of heat exchangers was determined and a unit cost for carbon dioxide capture was estimated. It was found that 2 stages are sufficient for effective removal of carbon dioxide in the adsorption column, while at least 5 stages are necessary to meet the captured-carbon specification of 95% molar purity. It was also possible to conclude that not all stages in the columns need heat exchangers, with some heat loads being set to 0 during the optimization. The pressure drop for each stage was estimated to be smaller than 0.072 bar for a bed 1 m high, and the air velocity was 40-45 cm/s (minimum fluidization velocity 10-11 cm/s), with low particle Reynolds numbers of about 17, indicating that the system fluidizes readily. These findings show that the methodology developed here is useful for guiding the conceptual design of fluidized TSA processes for carbon capture.
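The Toth isotherm underlying the equilibrium model, q = qm*b*P / (1 + (b*P)^t)^(1/t), reduces to the Langmuir form for t = 1 (as fixed for nitrogen); the sketch below uses illustrative parameter values, not the experimentally fitted ones.

import numpy as np

def toth(P, qm, b, t):
    """Loading q (mol/kg) at partial pressure P (bar) via the Toth isotherm."""
    return qm * b * P / (1.0 + (b * P) ** t) ** (1.0 / t)

P = np.linspace(0.01, 1.0, 5)
print("CO2:", toth(P, qm=3.0, b=8.0, t=0.6).round(3))   # heterogeneous sorbent
print("N2: ", toth(P, qm=1.0, b=0.3, t=1.0).round(3))   # t fixed at unity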



Unlocking Process Dynamics: Interpretable PDE Solutions via Symbolic Regression

Benjamin G. Cohen, Burcu Beykal, George M. Bollas

University of Connecticut, USA

Physics-informed symbolic regression (PISR) offers an innovative approach to automatically learning explicit, analytical approximate solutions to partial differential equations (PDEs). Chemical processes often involve dynamics that PDEs can effectively capture, providing valuable insights for engineers and scientists to improve process design and control. Traditionally, solving PDEs requires expertise in analytical methods or costly numerical schemes. With the advent of AI/ML, tools like physics-informed neural networks (PINNs) have emerged, which learn solutions to PDEs by constraining neural networks to satisfy the differential equations and boundary information. However, applying PINNs in safety-critical systems is challenging due to their many parameters and black-box nature.

To address these challenges, we explore the effect of replacing the neural network in PINNs with a symbolic regressor to create PISR. Guided by a carefully selected information-theoretic loss function that balances model agreement with differential equations and boundary information against identifiability, PISR can learn approximate solutions to PDEs that are symbolic rather than neural network approximations. This approach yields concise, clear analytical approximate solutions that balance model complexity and fit quality. Using an open-source symbolic regression package in Julia, we demonstrate PISR’s efficacy by learning approximate solutions to several PDEs common in process engineering and compare the learned representations to those obtained using PINNs. The PISR models, when compared to the PINN models, are straightforward, easy to understand, and contain very few parameters, making them ideal for sensitivity analysis and ensuring robust process design and control.
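Since the abstract does not name the package, the sketch below uses PySR, a Python frontend to the Julia package SymbolicRegression.jl, and, to stay short, fits the known solution of a toy PDE, u(x, t) = exp(-t)*sin(pi*x), from sampled data rather than from a PDE residual loss.

import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 2))               # columns: x, t
y = np.exp(-X[:, 1]) * np.sin(np.pi * X[:, 0])     # target field u(x, t)

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "sin"],
)
model.fit(X, y)
print(model.sympy())                                # best symbolic expression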



Eco-Designing Pharmaceutical Supply Chains: A Process Engineering Approach to Life Cycle Inventory Generation

Indra CASTRO VIVAR1, Catherine AZZARO-PANTEL1, Alberto A. AGUILAR LASSERRE2, Fernando MORALES-MENDOZA3

1Laboratoire de Génie Chimique, Université de Toulouse, CNRS, INPT, UPS, Toulouse, France; 2Tecnologico Nacional de México, Instituto Tecnológico de Orizaba, México; 3Universidad Autónoma de Yucatán, Facultad de Ingeniería Química, Mérida, Yucatán, México

The environmental impact of Active Pharmaceutical Ingredients (APIs) is an increasingly significant research focus, as global pharmaceutical manufacturing practices face heightened scrutiny regarding sustainability. Paracetamol (acetaminophen), one of the most extensively used APIs, requires closer examination due to its current production practices. Most paracetamol is manufactured in large-scale facilities in India and China, with production capacities ranging from 2,000 to 40,000 tons annually.

Offshoring pharmaceutical manufacturing, traditionally a cost-saving strategy, has increased supply chain complexity and dependency on foreign API sources. This reliance has made Europe’s pharmaceutical production vulnerable, especially during global crises or geopolitical tensions, such as the disruptions seen during the COVID-19 pandemic. Consequently, there is growing interest in reshoring pharmaceutical production to Europe. The European Pharmaceutical Strategy (2020) [1] advocates decentralizing production to create shorter, more sustainable value chains. This move seeks to enhance access to high-quality medicines while minimizing the environmental impacts of long-distance transportation.

In France, the government has introduced measures to relocate the production of 50 essential drugs as part of a re-industrialization plan to address medication shortages. Paracetamol sales were restricted in 2022 and early 2023 due to supply chain issues, leading to the relocation of several manufacturing plants.

Yet, pharmaceuticals present unique challenges when assessed using Life Cycle Assessment (LCA), mainly due to a lack of comprehensive life cycle inventory (LCI) data. This scarcity is particularly evident for API synthesis (upstream) and downstream phases such as usage and end-of-life management.

This study aims to apply LCA methodology to evaluate various paracetamol API supply chain scenarios, focusing on the potential benefits of reshoring production to France. A major contribution of this work is the generation of LCI data for paracetamol production through process engineering and chemical process modeling. Aspen Plus software was used to model the paracetamol API manufacturing process, including mass and energy balances. This approach ensures that the datasets generated are robust and validated against available reference data. SimaPro software was used to conduct the LCA using the EcoInvent database and the Environmental Footprint (EF) impact assessment method.

One key finding is the reduction of greenhouse gas emissions for the selected functional unit (FU) of 1 kg of API. Significant differences in electricity use and steam heat generation were observed. According to the EF database, electricity in India results in emissions of 83 g CO₂ eq, while steam heat generation emits 1.38 kg CO₂ eq per FU. In contrast, French emissions are significantly lower, with electricity contributing 5 g CO₂ eq and steam heat generating 1.18 kg CO₂ eq per FU. Taken together, these two contributions amount to roughly 1.46 kg CO₂ eq per FU in India versus 1.19 kg CO₂ eq per FU in France. These results highlight the environmental advantages of relocating production to regions with decarbonized power grids.

This study underscores the value of process modeling in generating LCI data for pharmaceuticals and enhances the understanding of the environmental benefits of reshoring paracetamol manufacturing. The developed methodology can be applied to other chemicals with limited LCI data, supporting more sustainable decision-making in the pharmaceutical sector's eco-design, particularly during re-industrialization efforts.

[1] European Commission, Communication from the Commission: A New Industrial Strategy for Europe, COM(2020) 102, pp. 1-17.



Safe Reinforcement Learning with Lyapunov-Based Constraints for Control of an Unstable Reactor

José Rodrigues Torraca Neto1, Argimiro Resende Secchi1,2, Bruno Didier Olivier Capron1, Antonio del-Rio Chanona3

1Chemical and Biochemical Process Engineering Program/School of Chemistry, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 2Chemical Engineering Program/COPPE, Universidade Federal do Rio de Janeiro (UFRJ), Brazil; 3Sargent Centre for Process Systems Engineering, Imperial College London

Safe reinforcement learning (RL) is essential for real-world applications with uncertainty and safety constraints, such as autonomous robotics and chemical reactors. Recent advances (Brunke et al., 2022) focus on integrating control theory with RL to ensure safety during learning and deployment. These approaches include robust RL frameworks, constrained Markov decision processes (CMDPs), and safe exploration strategies. We propose a novel approach in which the RL algorithms PPO (Schulman et al., 2017), SAC (Haarnoja et al., 2018), DDPG (Lillicrap et al., 2016), and TD3 (Fujimoto et al., 2018) are trained with Lyapunov-based constraints to ensure stability. Since our reward function, −(x − xSP)², inherently generates negative rewards, we applied penalties to positive critic values and to decreases in critic estimates over time.

For off-policy algorithms (SAC, DDPG, TD3), penalties were applied directly to Q-values, discouraging non-negative values and preventing unexpected decreases. On-policy algorithms (PPO) applied these penalties directly to the value function. DDPG used Ornstein-Uhlenbeck noise for exploration, while TD3 used Gaussian noise, with optimized parameters. Hyperparameters, including safe RL constraints, were tuned using Optuna (Akiba et al., 2019), optimizing learning rates, network architectures, and penalty weights.

Our method was tested on an unstable Continuous Stirred Tank Reactor (CSTR) under random disturbances. Despite challenges posed by disturbances, the Safe RL approach was evaluated for resilience under dynamic conditions. A cosine annealing schedule dynamically adjusted learning rates, ensuring stable training. Base RL algorithms (without safety constraints) were trained on ten parallel environments with disturbances and compared to a Nonlinear Model Predictive Control (NMPC) benchmark. SAC performed best, achieving an optimality gap of 7.73×10⁻⁴ on the training pool and 3.65×10⁻⁴ on new disturbances. DDPG and TD3 exhibited instability due to temperature spikes without safety constraints.

Safe RL significantly improved SAC’s performance, reducing the optimality gap to 2.88×10⁻⁴ on the training pool and 2.36×10⁻⁴ on new disturbances, nearing NMPC performance. Safe RL also reduced instability in DDPG and TD3, preventing temperature spikes and reducing policy noise, though it increased offset from the setpoint, resulting in larger optimality gaps. Despite this tradeoff, Safe RL made these algorithms more reliable, considering unseen disturbances. Overall, Safe RL brought SAC close to optimality across disturbance conditions while it mitigated instability in DDPG and TD3 at the cost of higher setpoint offsets.
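The Lyapunov-style penalties described above might be written, for an off-policy critic update, roughly as follows; the weights and tensor shapes are illustrative, and this is a sketch of the idea rather than the authors' implementation.

import torch

def constrained_critic_loss(q_pred, q_target, q_prev, w_pos=1.0, w_decay=1.0):
    """TD loss plus penalties on (i) non-negative Q-values, which cannot occur
    under the reward -(x - x_sp)^2, and (ii) drops of the critic estimate
    relative to its previous value along the trajectory."""
    td = torch.nn.functional.mse_loss(q_pred, q_target)
    pos_penalty = torch.relu(q_pred).pow(2).mean()             # Q should stay negative
    decay_penalty = torch.relu(q_prev - q_pred).pow(2).mean()  # penalize decreases
    return td + w_pos * pos_penalty + w_decay * decay_penalty

# toy usage with random tensors standing in for a batch of critic outputs
q_pred = torch.randn(32, 1)
loss = constrained_critic_loss(q_pred, q_pred - 0.1, q_pred.detach() - 0.05)
print(loss.item())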

References:
L. Brunke et al., 2022, "Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning," Annual Review of Control, Robotics, and Autonomous Systems, Vol. 5, pp. 411–444.
J. Schulman et al., 2017, "Proximal Policy Optimization Algorithms," arXiv:1707.06347.
T. Haarnoja et al., 2018, "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor," Proceedings of the 35th ICML, Vol. 80, pp. 1861-1870.
T.P. Lillicrap et al., 2016, "Continuous Control with Deep Reinforcement Learning," arXiv:1509.02971.
S. Fujimoto et al., 2018, "Addressing Function Approximation Error in Actor-Critic Methods," Proceedings of the 35th ICML, Vol. 80, pp. 1587-1596.
T. Akiba et al., 2019, "Optuna: A Next-generation Hyperparameter Optimization Framework," Proceedings of the 25th ACM SIGKDD, pp. 2623-2631.



A two-level model to assess the economic feasibility of renewable urea production from agricultural wastes

Diego Costa Lopes, Moisés Teles Dos Santos

Universidade de São Paulo, Brazil

Agroindustrial wastes can be an abundant source of chemicals, biofuels and energy. Based on this premise, this work presents a two-level modeling framework (process models and a supply chain model) and an optimization framework for an integrated biorefinery system converting agricultural residues into renewable urea via gasification routes, with a possible additional hydrogen input from electrolysis. A process model of the gasification process was developed in Aspen Plus® to identify key performance indicators, such as energy consumption and relative urea yields, for different biomasses and operating conditions; these key process data were then used in a mixed-integer linear programming (MILP) model designed to identify the combination of energy source, technological route for urea production and plant location that maximizes the net present value of the system. The gasification step was modeled with an equilibrium approach. Besides the gasifier, the plant comprises a conditioning system to adjust the syngas composition, CO2 capture, and ammonia and urea synthesis.

Based on the model’s results, three technological routes (oxygen gasification, air gasification and water electrolysis) were chosen as the most promising, and 6 different biomasses (rice husks, coffee husks, corn stover, soybean straw, sugarcane straw and bagasse) were identified as representative of the Brazilian agricultural scenario. The country was divided into 5569 cities and 558 microregions, and each region's agricultural production was evaluated to estimate biomass supply and urea demand. Electricity prices were also considered based on current tariffs. The MILP model maximizes the net present value, with energy source, location and route as decision variables, subject to constraints on biomass supply, urea demand and transport between regions, and was applied to the whole country at the microregion level. It was found that the Assis microregion in São Paulo state is the optimal location for the plant, leveraging the proximity of large sugarcane and soybean crops and cheaper electricity compared to the rest of the country, with a positive NPV for an 80 t/h urea plant. Biomass dominates the total plant costs (60%), followed by power (25%) and urea transport (10%). Biomass supply was not found to be a major constraint in any region; urea demand is the main limiting factor, with more than 30 microregions needed to consume the plant's production, highlighting the need for close proximity between production and consumption to minimize logistics costs.

The model was also constrained to other regions of Brazil to evaluate local feasibility. The north and northeast regions were not found to be viable locations, with NPVs close to 0, given their lower biomass supplies and urea demands and the larger distances between microregions. Meanwhile, in the southern and midwest regions, the large availability of soybean residues also creates good conditions for a renewable urea plant, with NPVs of US$ 105 mil and US$ 103 mil, respectively. The results indicate the feasibility of producing renewable urea from agricultural wastes and the importance of a two-level approach for assessing the economic performance of the entire system.



Computer-based Chemical Engineering Education for Green and Digital Transformation

Zorka Novak Pintarič, Miloš Bogataj, Zdravko Kravanja

Faculty of Chemistry and Chemical Engineering, University of Maribor, Smetanova ulica 17, SI-2000 Maribor, Slovenia

The mission of Chemical Engineering Education, particularly Computer-Aided Chemical Engineering, is to equip students with the knowledge and skills they need to drive the green and digital transformation. This involves integrating Chemical Engineering and Process Systems Engineering (PSE) within the Bologna 3-cycle study system. The EFCE Bologna recommendations for Chemical Engineering programs will be reviewed, with a focus on PSE topics, especially those relevant to the green and digital transformation. Key challenges in introducing sustainability and digital knowledge will be highlighted, along with the necessary development of teaching methods and tools.

While chemical engineering programs contain elements of green and digital engineering, their systematic integration into core subjects is limited. The analysis of our study program shows that only a few principles of green engineering, such as maximizing efficiency and energy flow integration, are explicitly addressed. Other principles are indirectly presented through case studies but lack structured inclusion. Digital skills in the current curricula focus mainly on spreadsheets for data processing, basic programming, and process simulation. Green and digital content is more extensively explored in project work and advanced studies, with elective courses and final theses offering deeper engagement.

Artificial Intelligence (AI), as a critical element of digitalization, will significantly impact chemical engineering education, influencing both teaching methods and process optimization. However, the interdisciplinary complexity of AI poses challenges. Students need a solid foundation in programming, data science, and statistics to master AI tools, making its gradual introduction essential. The question therefore arises as to how AI can be effectively integrated into chemical engineering education by striking a balance between technical skills and critical thinking, fostering creativity and ethical awareness without sacrificing the engineering fundamentals.

Given the rapid pace of change in the industry, chemical engineering education needs to be reformed, particularly at the bachelor's and master's levels. Core challenges include systematically integrating essential green and digital topics into syllabi, introducing new courses like AI and data science, modernizing textbooks with numerical examples, and providing teachers with training to keep pace with technological advancements.



Development of a Hybrid Model for the Paracetamol Batch Dissolution in Ethanol Using Universal Differential Equations

Fernando Arrais Romero Dias Lima1,2, Amyr Crissaff Silva1, Marcellus G. F. de Moraes3,4, Amaro G. Barreto Jr.1, Argimiro R. Secchi1,4, Idelfonso Nogueira2, Maurício B. de Souza Jr.1,4

1School of Chemistry, EPQB, Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco E, 21941-914, Rio de Janeiro, RJ – Brazil; 2Chemical Engineering Department, Norwegian University of Science and Technology, Trondheim, 793101, Norway; 3Instituto de Química, Rio de Janeiro State University (UERJ), Rua São Francisco Xavier, 524, Maracanã, Rio de Janeiro, RJ, 20550-900, Brazil; 4PEQ/COPPE – Universidade Federal do Rio de Janeiro, Av. Horácio Macedo, 2030, CT, Bloco G, G115, 21941-914, Rio de Janeiro, RJ – Brazil

Crystallization is a relevant process in the pharmaceutical industry for product purification and particle production. An efficient crystallization is characterized by crystals produced with the desired attributes, so modeling this process is key to achieving that goal. In this sense, the objective of this work is to propose a hybrid model to describe paracetamol dissolution in ethanol. The universal differential equations methodology is used in the development of this model: a neural network predicts the dissolution rate and is combined with the population balance equations to calculate the moments of the crystal size distribution (CSD) and the concentration. The model was developed using the experimental batches reported by Kim et al. [1]. The dataset is composed of concentration measurements obtained using attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy. The objective function of the optimization problem minimizes the difference between the experimental and predicted concentrations. The hybrid model efficiently predicted the concentration compared to the experimental measurements. Moreover, the hybrid approach made predictions of the CSD moments similar to those of the population balance model proposed by Kim et al. [1] and successfully predicted batches not included in the training dataset. Furthermore, the performance of the hybrid model was similar to that of the phenomenological population-balance-based model, but without the need for solubility information. Therefore, the universal differential equations approach proved an efficient methodology for modeling crystallization processes with limited information.

1. Kim, Y., Kawajiri, Y., Rousseau, R.W., Grover, M.A., 2023. Modeling of nucleation, growth, and dissolution of paracetamol in ethanol solution for unseeded batch cooling crystallization with temperature-cycling strategy. Industrial & Engineering Chemistry Research 62, 2866–2881.
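To make the universal-differential-equation idea concrete (this is not the authors' implementation, and all magnitudes are illustrative), the PyTorch sketch below embeds a small neural network for the dissolution rate inside explicit-Euler-integrated moment equations and trains it on synthetic concentration data standing in for the ATR-FTIR measurements.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
kv_rho = 0.05  # lumped shape-factor x density term, illustrative magnitude
rate_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def step(state, T, dt):
    """One explicit-Euler step of the moment model with a learned dissolution rate."""
    mu0, mu1, mu2, mu3, C = state
    # softplus keeps the learned rate positive; the minus sign makes it dissolution (G < 0)
    G = -torch.nn.functional.softplus(rate_net(torch.stack([C, T]))[0])
    dmu1, dmu2, dmu3 = G * mu0, 2 * G * mu1, 3 * G * mu2
    dC = -3.0 * kv_rho * G * mu2            # solute is released as crystals shrink
    return torch.stack([mu0, mu1 + dt * dmu1, mu2 + dt * dmu2,
                        mu3 + dt * dmu3, C + dt * dC])

# Synthetic "measurements" stand in for the experimental concentration data
t = torch.linspace(0.0, 1.0, 50)
C_exp = 0.20 + 0.05 * (1.0 - torch.exp(-3.0 * t))
dt = (t[1] - t[0]).item()

opt = torch.optim.Adam(rate_net.parameters(), lr=1e-2)
for epoch in range(300):
    state = torch.tensor([1.0, 0.5, 0.3, 0.2, 0.20])  # normalized moments + C0
    preds = []
    for k in range(len(t) - 1):
        state = step(state, torch.tensor(0.5), dt)    # scaled temperature input
        preds.append(state[4])
    loss = torch.mean((torch.stack(preds) - C_exp[1:]) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"training loss: {loss.item():.2e}")
```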



Novel PSE applications and knowledge transfer in joint industry-university energy-related postgraduate education

Athanasios Stefanakis2, Dimitra Kolokotsa2, Evaggelos Kapartzianis2, Ioannis Bonis2, Emilia Kondili1, John K. Kaldellis3

1Optimisation of Production Systems Lab., University of West Attica; 2Hellenic Energy S.A.; 3Soft Energy Applications and Environmental Protection Lab., University of West Attica

The area of process systems engineering has historically had a profound theoretical involvement with what is known today as artificial intelligence, with researchers (notably Stephanopoulos 1985, but also Himmelblau 1993) testing these new ideas in all their forms. At the time, however, neither the computer hardware nor the available data had the capacity required by these models.

The situation is different today: the large amounts of data available in industry and highly available cloud computing have been essential in making broad applications of machine learning models viable. In the area of process systems engineering, the types of problems currently addressed with machine learning routines are:

  1. The control system, or, in terms of companies' equipment, the Distributed Control Systems implemented on servers with real-time operating systems. Predictive algorithms with millions of coefficients (or, lately, fewer but more robust ones), for example neural networks and deep learning, are better suited to addressing larger systems than single pieces of equipment. Plant-wide optimization has not yet happened, but supply chain optimization is an area that is already seeing applications and is studied in academia.
  2. The process safety system (also known as the interlock or emergency shutdown system), implemented in PLCs, has also been augmented by ML through fault prediction and diagnosis methods. Most often applied to rotating machine performance (asset performance management systems), it predicts failures in advance so that companies can take timely measures, minimizing the risk of accidents as well as production losses (predictive maintenance).

Teaching this subject poses three challenges:

  1. The process systems engineering subject itself, which requires an understanding of modeling and is already not easy.
  2. The machine learning subject, which also requires an understanding of modeling but is not a core subject in PSE.
  3. The data engineering subject. As the systems become larger (soon they will be plant-wide), knowledge of databases and cloud operating systems is required at least to understand the structure of the models to be used.

These subjects do not share a common language and constitute three separate frames of knowledge. A re-framing of PSE is required to include all three disciplines in its core, and this must be done faster than in the past. The potential of the younger generations is enormous, as they learn hands-on, but for older practitioners this is already overwhelming.

In the coming period, machine learning will continue to evolve in the form of plant optimizers and fault detection and diagnosis models.

The present article will present the continuous evolution and progress of the cooperation between the largest energy company in Greece and the University in the implementation of knowledge transfer and advanced postgraduate and doctoral education courses. Furthermore, the novel ideas on AI implementation in the process industry mentioned above will be described, and the prospects this opens for both the industry and the university will be highlighted.



Optimal Operation of Middle Vessel Batch Distillation using an Economic MPC

Surendra Beniwal, Sujit Jogwar

IIT Bombay, India

Middle vessel batch distillation (MVBD) is an alternative configuration of conventional batch distillation with an improved sustainability index. MVBD consists of two column sections separated by a (middle) vessel for the separation of a ternary mixture. It works on the principle of multi-effect operation, wherein vapour from one column section (effect) is used to drive the subsequent effect, thus reducing the overall energy consumption [1]. The entire system is operated under total reflux, and at the end of the batch the three products are accumulated in the three vessels (reflux drum, middle vessel and reboiler).

Previously, it was shown that the performance of the MVBD can be quantified in terms of an overall performance index (OPI) which captures the trade-off between separation and energy efficiency [2]. Furthermore, the holdup trajectory of each vessel can be manipulated during operation to maximize the OPI. A feedback control system is needed to track these optimal holdup profiles during a batch.

The present work compares two approaches: sequential (open-loop optimization + closed-loop control) and simultaneous (closed-loop optimization + control). In the sequential approach, the optimal set-point trajectory generated by offline optimization is tracked using a model predictive controller (MPC). Alternatively, in the simultaneous approach, OPI maximization is used as the objective function of the controller, a formulation similar to economic MPC. As the prediction horizon for this controller is much smaller than the batch time, the problem is reformulated to ensure feasibility of end-of-batch constraints. The two approaches are compared in terms of effectiveness, overall performance index, robustness (to disturbances and plant-model mismatch) and associated implementation challenges (computational time). A simulation case study on the separation of a ternary mixture of benzene, toluene and o-xylene is used to illustrate the controller design and performance.

References:

[1] Davidyan, A. G., Kiva, V. N., Meski, G. A., & Morari, M. (1994). Batch distillation in a column with a middle vessel. Chemical Engineering Science, 49(18), 3033-3051.

[2] Beniwal, S., & Jogwar, S. S. (2024). Batch distillation performance improvement through vessel holdup redistribution—Insights from two case studies. Digital Chemical Engineering, 13, 100187.
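The contrast between the two formulations can be sketched on a toy scalar holdup model. The Python sketch below uses hypothetical dynamics and a stand-in performance index (not the MVBD model): the sequential approach tracks a precomputed set-point trajectory, while the simultaneous approach optimizes the index directly, as in economic MPC.

```python
import numpy as np
from scipy.optimize import minimize

dt, H = 0.1, 10                       # step size and prediction horizon

def simulate(M0, u):                  # toy holdup dynamics: dM/dt = u
    M = [M0]
    for uk in u:
        M.append(M[-1] + dt * uk)
    return np.array(M[1:])

def tracking_cost(u, M0, M_ref):      # sequential: track an offline optimum
    return np.sum((simulate(M0, u) - M_ref) ** 2) + 1e-2 * np.sum(u ** 2)

def economic_cost(u, M0):             # simultaneous: optimize an OPI proxy
    M = simulate(M0, u)
    opi = M * (1.0 - M)               # stand-in index, peaks at M = 0.5
    return -np.sum(opi) + 1e-2 * np.sum(u ** 2)

M0, M_ref = 0.2, np.full(H, 0.5)
u0 = np.zeros(H)
u_track = minimize(tracking_cost, u0, args=(M0, M_ref)).x
u_econ = minimize(economic_cost, u0, args=(M0,)).x
print("first moves (tracking vs economic):", u_track[0], u_econ[0])
```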



Recurrent deep learning models for multi-step ahead prediction: comparison and evaluation for a real Electrical Submersible Pump (ESP) system

Vinicius Viena Santana1, Carine de Menezes Rebello1, Erbet Almeida Costa1, Odilon Santana Luiz Abreu2, Galdir Reges2, Téofilo Paiva Guimarães Mendes2, Leizer Schnitman2, Marcos Pellegrini Ribeiro3, Márcio Fontana2, Idelfonso Nogueira1

1Norwegian University of Science and Technology, Norway; 2Federal University of Bahia, Brazil; 3CENPES, Petrobras R&D Center, Brazil

Predicting future states from historical data is crucial for automatic control and dynamic optimization in engineering. Recent advances in deep learning have provided new opportunities to improve prediction accuracy across various engineering disciplines, particularly using Artificial Neural Networks (ANNs). Recurrent Neural Networks (RNNs), in particular, are well suited for time series prediction due to their ability to model dynamic systems through recurrent updates [1].

Despite RNNs' high predictive capacity, their potential can be underutilized if the model training does not consider the intended future usage scenario [2,3]. In applications like Model Predictive Control (MPC), the model must evolve over time, relying only on its own predictions rather than ground-truth data. Training a model to predict only one step ahead may result in poor performance when applied to multi-step predictions, as errors compound in the auto-regressive (or generative) mode.

This study focuses on identifying optimal strategies for training deep recurrent neural networks to predict critical operational time series data from a real Electric Submersible Pump (ESP) system. We evaluate the performance of RNNs in multi-step-ahead predictions under two conditions: (1) when trained for single-step predictions and recursively applied to multi-step forecasting, and (2) using a novel training approach explicitly designed for multi-step-ahead predictions. Our findings reveal that the same model architecture can exhibit markedly different performance in multi-step-ahead forecasting, emphasizing the importance of aligning the training process with the model's intended real-time application to ensure reliable predictions.

[1] Himmelblau, D.M. Applications of artificial neural networks in chemical engineering. Korean J. Chem. Eng. 17, 373–392 (2000). https://doi.org/10.1007/BF02706848

[2] Marrocos, P.H., Iwakiri, I.G.I., Martins, M.A.F., Rodrigues, A.E., Loureiro, J.M., Ribeiro, A.M., & Nogueira, I.B.R. (2022). A long short-term memory based Quasi-Virtual Analyzer for dynamic real-time soft sensing of a Simulated Moving Bed unit. Applied Soft Computing, 116, 108318. https://doi.org/10.1016/j.asoc.2021.108318

[3] Nogueira, I.B.R., Ribeiro, A.M., Requião, R., Pontes, K.V., Koivisto, H., Rodrigues, A.E., & Loureiro, J.M. (2018). A quasi-virtual online analyser based on artificial neural networks and offline measurements to predict purities of raffinate/extract in simulated moving bed processes. Applied Soft Computing, 67, 29-47. https://doi.org/10.1016/j.asoc.2018.03.001
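A minimal PyTorch sketch of the two training regimes, with a synthetic sine series standing in for the ESP measurements, is given below (it is not the authors' code); the rollout_steps argument switches between single-step training and training rolled out on the model's own predictions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.sin(torch.linspace(0, 20, 400)).unsqueeze(-1)   # toy series, shape (T, 1)

class Cell(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRUCell(1, hidden)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x, h):
        h = self.gru(x, h)
        return self.out(h), h

def train(rollout_steps):
    net = Cell()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        i = torch.randint(0, len(series) - rollout_steps - 1, (1,)).item()
        h = torch.zeros(1, 32)
        x = series[i].unsqueeze(0)
        loss = 0.0
        for k in range(rollout_steps):          # feed the model's own prediction back in
            y, h = net(x, h)
            loss = loss + (y - series[i + k + 1].unsqueeze(0)).pow(2).mean()
            x = y                               # autoregressive input
        opt.zero_grad(); loss.backward(); opt.step()
    return net

one_step = train(rollout_steps=1)    # (1) classic single-step training
multi_step = train(rollout_steps=20) # (2) training matched to multi-step use

def rollout_err(net, start=100, steps=50):
    """Free-run (generative) prediction error over `steps` steps."""
    h, x = torch.zeros(1, 32), series[start].unsqueeze(0)
    err = 0.0
    with torch.no_grad():
        for k in range(steps):
            x, h = net(x, h)
            err += (x - series[start + k + 1].unsqueeze(0)).pow(2).item()
    return err / steps

print(rollout_err(one_step), rollout_err(multi_step))
```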



Simulation and optimisation of vacuum (pressure) swing adsorption with simultaneous consideration of real vacuum pump data and bed fluidisation

Yangyanbing Liao, Andrew Wright, Jie Li

Centre for Process Integration, Department of Chemical Engineering, School of Engineering, The University of Manchester, United Kingdom

Pressure swing adsorption (PSA) is an essential technology for gas separation and purification. A PSA process where the highest pressure is above atmospheric pressure and the lowest pressure is at a vacuum level is referred to as vacuum pressure swing adsorption (VPSA). In contrast, vacuum swing adsorption (VSA) refers to a PSA process with the highest pressure equal to or slightly above atmospheric pressure and the lowest pressure below atmospheric pressure.

Most computational studies concerning the simulation of V(P)SA processes have assumed a constant vacuum pump efficiency ranging from 60% to 100%. Nevertheless, Krishnamurthy et al. [3] highlighted that 72% is a typical efficiency value for compressors, but not representative of vacuum pumps. They reported a low efficiency value of 30% estimated from their pilot-plant data. As a result, the energy consumption of the vacuum pump could have been underestimated by at least a factor of two in many computational studies.

Beyond the constant-efficiency assumption, efficiency correlations have been proposed to evaluate vacuum pump performance more accurately [4-5]. However, these correlations fail to conform to the trend suggested by the data points at higher pressures and do not accurately represent vacuum pump performance.

Adsorption bed fluidisation is another key factor in designing a PSA process, because bed fluidisation causes rapid adsorbent attrition and eventually results in a substantial decrease in separation performance [6]. However, the impacts of fluidisation on PSA optimisation have not been comprehensively addressed. More importantly, existing studies have not simultaneously incorporated real vacuum pump performance data and bed fluidisation limits into PSA optimisation.

To address the above research gaps, in this work we develop accurate prediction models for the pumping speed and power of the vacuum pump based on real performance curves, using a data-driven modelling approach [7-8]. We then develop a new, comprehensive V(P)SA model that allows an accurate evaluation of V(P)SA process performance without relying on an estimated vacuum pump energy efficiency or on pressure/flow-rate boundary conditions at the vacuum pump end of the adsorption bed. A new optimisation problem that simultaneously incorporates the proposed V(P)SA model and the bed fluidisation constraints is then constructed.

The computational results demonstrate that the vacuum pump efficiency falls within 20% to 40%. Using an estimated vacuum pump efficiency, the optimal cost is underestimated by at least 42% compared to that obtained using the proposed performance models. When the fluidisation constraints are incorporated, a low feed velocity and an exceptionally long cycle time are essential for maintaining a small pressure drop across the bed to prevent fluidisation. The optimal total cost increases by at least 16% compared to cases where bed fluidisation constraints are not incorporated. Hence, it is important to incorporate vacuum pump performance models developed from real data, together with bed fluidisation constraints, to accurately evaluate PSA performance.

References

1. Comput. Aided Chem. Eng. 2012:1217-21.

2. Energy 2017;137:495-509.

3. AIChE J. 2014;60(5):1830-42.

4. Int. J. Greenh. Gas Con. 2020;93:102902.

5. Ind. Eng. Chem. Res. 2019;59(2):856-73.

6. Adsorption 2014;20:757-68.

7. AIChE J. 2016;62(9):3020-40.

8. Appl. Energy 2022;305:117751.
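The pump-modelling idea can be sketched as follows; the performance-curve points below are invented, whereas the paper fits real manufacturer/pilot data. The sketch fits smooth models of pumping speed and power versus suction pressure and uses them to estimate evacuation energy for an ideal-gas volume.

```python
import numpy as np

p = np.array([5, 10, 20, 50, 100, 300, 600, 1013])          # suction pressure, mbar
speed = np.array([80, 150, 230, 310, 340, 350, 345, 330])   # pumping speed, m3/h (made up)
power = np.array([2.5, 3.1, 4.0, 5.6, 6.8, 8.0, 8.6, 9.0])  # shaft power, kW (made up)

# Fit in log-pressure space, where the curves are smoother
speed_fit = np.polynomial.Polynomial.fit(np.log(p), speed, deg=3)
power_fit = np.polynomial.Polynomial.fit(np.log(p), power, deg=3)

def pump_energy(p_start, p_end, volume, n=200):
    """Energy (kWh) to evacuate an ideal-gas volume (m3) using the fitted curves."""
    ps = np.geomspace(p_start, p_end, n)
    E = 0.0
    for p0, p1 in zip(ps[:-1], ps[1:]):
        pm = 0.5 * (p0 + p1)
        S = speed_fit(np.log(pm))          # m3/h at the mean pressure of the slice
        dt = volume / S * np.log(p0 / p1)  # h, from V dp/dt = -S p
        E += power_fit(np.log(pm)) * dt
    return E

print(f"{pump_energy(1013, 100, volume=10.0):.3f} kWh")
```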



Sociotechnical Transition: An Exploratory Study on the Social Appropriability of Users of Smart Meters in Wallonia

Elisa Boissézon

Université de Mons, Belgium

Optimal and autonomous daily use of new technologies is not a reality for everyone. In a societal context driven by sociotechnical transitions (Markard et al., 2012), many people lack access to digital equipment, do not possess the digital skills required to use it, and, consequently, are unable to participate digitally in social life via e-services. This reality is referred to as digital inequality (Agence du numérique, 2023) and is even more crucial to consider in the context of the increasing digitalization of society in all areas, including energy. Indeed, according to the European Union directives, member states are required to develop various means of action, including digital ones, which are essential to achieving the three strategic axes envisioned by the European energy transition scenario, namely: investment in renewable energies, energy efficiency, and energy sobriety (Dufournet & Marignac, 2018).

In this specific instance, our research focuses on the question of social appropriation (Zélem, 2018) of new technologies in the context of the deployment of smart meters in Wallonia, and the use of the associated digital tools by the public. These individuals, with their unique socio-economic and sociodemographic profiles, are not equally equipped to utilize all the functionalities offered by this new digital system for managing energy consumption (Agence du Numérique, 2023; Van Dijk, 2017; Valenduc, 2013). This exploratory and phenomenological study aims, firstly, to investigate the experiences of the public concerning the support received during the installation of the new smart metering system and to identify the barriers to the social appropriation of new technologies. Secondly, the field surveys aim to determine to what extent individual participatory forms of support (Benoît-Moreau et al., 2013; Cadenat et al., 2013), developed through the lens of active pedagogies such as experiential learning (Brotcorne & Valenduc, 2008, 2009), and collective forms (Bernaud et al., 2015; Turcotte & Lindsay, 2008) can promote the inclusion of digitally vulnerable users. The central role of field professionals as interfaces (Cihuelo & Jobert, 2015) is also highlighted within the service relationship (Gadrey, 2003) that connects the end consumers, on the one hand, with the organization responsible for deploying the smart meters, on the other. Our qualitative investigations were conducted with four types of samples, through semi-structured interviews, considering several determining factors regarding engagement in the use of new technologies, from both individual and collective perspectives. Broadly speaking, our results indicate that while the standardized support protocol applied by field professionals during the installation of smart meters is sufficient for digitally proficient users, the situation is more nuanced for vulnerable populations, who have specific needs requiring close support. In this context, collective participatory support in workshops in the form of focus groups seems to have further promoted the digital inclusion of participants.



Optimizing Methane Conversion in a Flow Reactor System Using Bayesian Optimization and Fisher Information Matrix Driven Experimental Design Approaches: A Comparative Study

Michael Aku, Solomon Gajere Bawa, Arun Pankajakshan, Ye Seol Lee, Federico Galvanin

University College London, United Kingdom

Reaction processes are complex systems requiring optimization techniques to achieve optimal performance in terms of key performance indicators (KPIs) such as yield, conversion, and selectivity [1]. Optimization efforts often rely on accurate modelling of reaction kinetics, thermodynamics and transport phenomena to guide experimental design and improve reactor performance. Bayesian Optimization (BO) and Fisher Information Matrix-driven (FIMD) techniques are two key approaches used in the optimization of reaction systems [2].
BO identifies optimal conditions efficiently by starting from an exploration of the design space, while FIMD approaches have recently been proposed to maximise the information gained from experiments and progressively improve parameter estimation [3], focusing more on exploitation of the decision space to reduce the uncertainty in kinetic model parameters [4]. Both techniques have been widely used in scientific and industrial domains, but they differ fundamentally in how they balance exploration (gaining new knowledge) and exploitation (using current knowledge to optimize outcomes) during model calibration.

This study presents a comparative assessment of BO and FIMD methods for optimal experimental design, focusing on the complete oxidation of methane in an automated flow reactor system [5]. The performance of both methods is evaluated in terms of methane conversion optimization, experimental efficiency (i.e., the number of runs required to reach the optimum), and information gained. Our preliminary findings suggest that while BO readily converges to a high methane conversion, FIMD can be a valid alternative to reduce the number of required experiments, offering more insight into the sensitivities of each parameter and into process dynamics. The comparative analysis paves the way towards developing explainable or physics-informed data-driven models that map the relationship between predicted experimental information and KPIs. The comparison also highlights trade-offs between convergence speed and robustness in experimental design, which are key aspects for a comprehensive evaluation of both approaches in online reaction process optimization.

References

[1] Taylor, C. J., Pomberger, A., Felton, K. C., Grainger, R., Barecka, M., Chamberlain, T. W., & Lapkin, A. A. (2023). A brief introduction to chemical reaction optimization. Chemical Reviews, 123(6), 3089-3126.

[2] Quirino, P. P. S., Amaral, A. F., Manenti, F., & Pontes, K. V. (2022). Mapping and optimization of an industrial steam methane reformer by the design of experiments (DOE). Chemical Engineering Research and Design, 184, 349-365.

[3] Friso, A., & Galvanin, F. (2024). An optimization-free Fisher information driven approach for online design of experiments. Computers & Chemical Engineering, 187, 108724.

[4] Green, P. L., & Worden, K. (2015). Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 373(2051), 20140405.

[5] Pankajakshan, A., Bawa, S. G., Gavriilidis, A., & Galvanin, F. (2023). Autonomous kinetic model identification using optimal experimental design and retrospective data analysis: methane complete oxidation as a case study. Reaction Chemistry & Engineering, 8(12), 3000-3017.
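The two design philosophies can be contrasted in a few lines of Python. In the sketch below (not the authors' code), a toy Arrhenius rate model stands in for the methane oxidation kinetics: the FIMD branch picks the candidate experiment that maximizes the determinant of the accumulated Fisher information, while the BO branch picks the candidate with the highest expected improvement under a Gaussian process surrogate. All parameter values, data and candidate grids are hypothetical.

```python
import numpy as np
from scipy.optimize import approx_fprime
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

R = 8.314
def rate(theta, x):
    """Toy rate r = k0 * exp(-Ea/(R*T)) * C standing in for the kinetics."""
    k0, Ea = theta
    T, C = x
    return k0 * np.exp(-Ea / (R * T)) * C

theta_hat = np.array([1.0e3, 3.0e4])            # current parameter estimate (assumed)
past = [(500.0, 0.5), (600.0, 2.0)]             # already-run experiments (T, C)
y_past = np.array([0.30, 0.65])                 # measured conversions (made up)
candidates = [(T, C) for T in (500.0, 550.0, 600.0) for C in (0.5, 1.0, 2.0)]

# FIM-driven design: maximize det of the accumulated Fisher information
def fim(x, sigma=0.05):
    J = approx_fprime(theta_hat, lambda th: rate(th, x), 1e-6 * theta_hat)
    return np.outer(J, J) / sigma**2

F0 = sum(fim(x) for x in past)
fimd_pick = max(candidates, key=lambda x: np.linalg.det(F0 + fim(x)))

# Bayesian optimization: maximize expected improvement under a GP surrogate
gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(past), y_past)
mu, sd = gp.predict(np.array(candidates), return_std=True)
z = (mu - y_past.max()) / (sd + 1e-9)
ei = (mu - y_past.max()) * norm.cdf(z) + sd * norm.pdf(z)
bo_pick = candidates[int(np.argmax(ei))]

print("FIMD picks:", fimd_pick, "| BO picks:", bo_pick)
```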



Optimal Control of PSA Units Based on Extremum Seeking

Beatriz Cambão da Silva1,2, Ana Mafalda Ribeiro1,2, Diogo Filipe Rodrigues1,2, Alexandre Filipe Porfírio Ferreira1,2, Idelfonso Bessa Reis Nogueira3

1Laboratory of Separation and Reaction Engineering−Laboratory of Catalysis and Materials (LSRE LCM), Department of Chemical Engineering, University of Porto, Porto, 4200-465, Portugal; 2ALiCE−Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

The application of real-time optimization (RTO) to dynamic operations is challenging due to the complexity of the nonlinear problems involved, making it difficult to achieve robust solutions [1]. Regarding cyclic adsorption processes, particularly Pressure Swing Adsorption (PSA) and Temperature Swing Adsorption (TSA), real-time control of the process is essential to maintain or increase productivity.

The literature on real-time optimization of PSA units relies on Model Predictive Control (MPC) and Economic Model Predictive Control (EMPC) [2]. These options depend heavily on an accurate model representation of the industrial plant, requiring high computational effort and time to ensure optimal control [3]. Given the importance of PSA and TSA systems in multiple separation operations, establishing alternatives for real-time control and optimization is in order. With that in mind, this work explores alternative model-free real-time optimization techniques that depend on simple control elements, such as Extremum Seeking Control (ESC).

The chosen case study was syngas upgrading, which is relevant since it precedes the Fischer-Tropsch reactions that enable an alternative to fossil fuels. Syngas upgrading can also provide H2 for ammonia production and reduce CO2 emissions. The operation of the PSA unit for syngas upgrading used as the basis for this study is discussed in the work of Regufe et al. [4].

Extremum seeking control is a method that aims to control the process by driving an objective's gradient towards zero while estimating that gradient from persistent perturbations. A high-pass filter (HPF) eliminates the signal's DC component to obtain a clearer response to changes in the system. The input variable u is continually perturbed by a sinusoidal wave, which helps assess the evolution of the objective function by keeping the system in a state of constant perturbation. An integrator then determines the adjustment in u needed to bring the objective function closer to its optimum; this adjustment is often scaled by a gain K to accelerate convergence.
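A minimal sketch of this loop in Python, with a toy quadratic objective standing in for the PSA unit and illustrative dither amplitude a, frequency w and gain K, reads:

```python
import numpy as np

def plant(u):                 # stand-in for the PSA objective evaluation
    return -(u - 2.0) ** 2    # optimum at u = 2

dt, a, w, K = 0.01, 0.1, 5.0, 0.8
u_hat, hp_state, J_prev = 0.0, 0.0, plant(0.0)
for k in range(20000):
    t = k * dt
    dither = a * np.sin(w * t)
    J = plant(u_hat + dither)                 # perturbed measurement
    # one-pole high-pass filter removes the DC component of J
    hp = 0.98 * hp_state + (J - J_prev)
    hp_state, J_prev = hp, J
    grad_est = hp * np.sin(w * t)             # demodulation: correlate with the dither
    u_hat += dt * K * grad_est                # integrator drives the gradient to zero
print(f"converged input: {u_hat:.3f}")        # approaches 2.0
```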

The PSA model was implemented in gPROMS to represent the behaviour of the industrial plant, communicating with MATLAB and Simulink, where the ESC was implemented.

Extremum seeking control successfully optimized the CO2 productivity of PSA units for syngas upgrading/H2 purification. This shows that ESC can be a valuable tool for optimizing and controlling PSA processes, and it does not require the unit to reach a cyclic steady state before adjusting the operation.

[1] S. Kameswaran and L. T. Biegler, “Simultaneous dynamic optimization strategies: Recent advances and challenges,” Computers & Chemical Engineering, vol. 30, no. 10, pp. 1560–1575, 2006, doi: 10.1016/j.compchemeng.2006.05.034.

[2] H. Khajuria and E. N. Pistikopoulos, “Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty,” AIChE Journal, vol. 59, no. 1, pp. 120–131, Jan. 2013, doi: 10.1002/aic.13783.

[3] S. Skogestad, “Advanced control using decomposition and simple elements,” Annual Reviews in Control, vol. 56, p. 100903, 2023, doi: 10.1016/j.arcontrol.2023.100903.

[4] M. J. Regufe et al., “Syngas Purification by Porous Amino-Functionalized Titanium Terephthalate MIL-125,” Energy & Fuels, vol. 29, no. 7, pp. 4654–4664, 2015, doi: 10.1021/acs.energyfuels.5b00975.



Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia – FIND4S

Monika Polanska1, Yoga Pratama2, Setya Budi Abduh2, Ahmad Ni'matullah Al-Baarri2, Jan Van Impe1

1BioTeC+, Chemical & Biochemical Process Technology & Control, KU Leuven, Belgium; 2Department of Food Technology, Diponegoro University, Indonesia

The Capacity Building Project entitled “Enhancing Higher Education Capacity for Sustainable Data Driven Food Systems in Indonesia” (FIND4S, “FIND force”) aims to boost the institutional and administrative resources of seven Indonesian higher education institutions (HEIs) in Central Java.

The EU overarching priorities addressed through the FIND4S project include the Green Deal and Digital Transformation, through developing knowledge, competences, skills and values. The modernized, competitive and innovative curricula will stimulate green jobs and pave the way to sustainable food systems in which environmental impact is taken into account. The essential elements of risk assessment, predictive modelling and computational optimization are to be brought together with the sustainability principles of food production and food processing, as well as energy and food chain concepts (Life Cycle Assessment), within one coherent structure. The project will offer a better understanding of ecological and food system dynamics and propose strategies for regenerating natural systems through the use of big data, providing predictive tools for the food industry. The predictive modelling tools can be applied to evaluate the effects of climate change on food safety, helping all stakeholders manage this new threat. Raising the quality of education through digital technologies will enable learners to acquire essential competences and sector-specific digital skills. The inclusion of data management to address sustainability challenges will reinforce the scientific, technical and innovation capacities of HEIs and foster links between academia, research and industry.

Initially, the FIND4S project will modernize Bachelor's degree curricula to include food systems and technology-oriented programs at partner universities in Indonesia. This modernization aims to meet the desired accreditation standards and better prepare graduates for postgraduate studies. Additionally, at the central hub university, the project will develop a new and innovative Master's degree program in sustainable food systems that integrates sustainability and environmental awareness into graduate education. This program will align with labor market demands and address the challenges agriculture and food systems are facing, providing insights into potential threats and opportunities for knowledge transfer to Indonesia through education and research.

The recognition and implementation of novel and innovative programs will be tackled via significant improvement of food science education by designing new curricula and upgrading existing ones, training academic staff, creating a research center and equipping laboratories, as well as expanding the network of collaboration with European Higher Education Institutions. The project will utilize big data, quantitative modeling, and engineering tools to engage all stakeholders, including industry partners. The comprehensive MSc program will meet the growing demand for knowledge, experience, and standards in Indonesia, contributing to a greener and more sustainable economy and society. Ultimately, this initiative will support the necessary transformation towards socially, environmentally, and economically sustainable food systems.



Optimization of Specific Heat Transfer Area for Multiple Effects Desalination (MED) Process

Salih Alsadaie1, Sana I Abukanisha1, Iqbal M Mujtaba3, Amhamed A Omar2

1Sirte University, Libya; 2Sirte Oil Company, National Oil Corporation, Libya; 3University of Bradford, United Kingdom

The world population is expected to increase massively in the coming decades, putting more stress on the desalination industry to cope with the increasing demand for fresh water. At the same time, with the increasing cost of living, freshwater production processes face the challenge of producing freshwater at higher quality and lower cost. The best-known techniques for water desalination are thermal-based, such as multistage flash desalination (MSF) and multiple effect desalination (MED), and membrane-based, such as reverse osmosis (RO). Although the installed capacity of RO remarkably surpasses that of MSF and MED, the MED process is the preferred option for new plants in locations around the world where waste heat is available. Nevertheless, MED desalination technology is also required to cut costs further by optimizing its operating and design parameters.

Several studies in the literature focus on optimizing the MED process. Most of them aim at increasing the production rate or minimizing energy consumption by optimizing operating conditions, using more efficient control systems, integrating with power plants, or hybridizing with other desalination techniques. However, none of the available studies focused on the optimum design configuration, such as the heat transfer area and the number of effects.

In this paper, a mathematical model describing the MED process is developed and solved using the gPROMS software. For a fixed production rate, the heat transfer area is optimized by varying the seawater temperature and flowrate, the steam temperature and flowrate, and the number of effects. The design and operating data are taken from an existing, almost new, small MED plant with two large effects and two small effects.

Keywords: MED desalination, gPROMS, optimization, heat transfer area, multiple effects.
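As a back-of-envelope sketch of the trade-off being optimized (a classic shortcut estimate, not the paper's gPROMS model), the Python snippet below shows steam consumption falling while the specific heat transfer area grows as effects are added; the GOR rule of thumb and all coefficients are illustrative.

```python
def specific_area(N, D=100.0, U=2.5, dT_total=40.0):
    """Shortcut MED estimate: m2 per (kg/s distillate) and steam demand for N effects."""
    dT = dT_total / N                 # temperature drop per effect, K
    gor = 0.85 * N                    # gained output ratio, rule-of-thumb estimate
    steam = D / gor                   # kg/s motive steam for D kg/s distillate
    q_effect = 2330.0 * (D / N)       # kW per effect (latent heat ~2330 kJ/kg)
    area = N * q_effect / (U * dT)    # m2 total, equal-area assumption
    return area / D, steam

for N in (4, 8, 12, 16):
    a, s = specific_area(N)
    print(f"N={N:2d}: {a:7.1f} m2 per (kg/s), steam {s:5.1f} kg/s")
```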



Companies’ operation and trading strategies under the triple trading of electricity, carbon quota and commodities: A game theory optimization modelling

Chenxi Li1, Nilay Shah2, Zheng Li1, Pei Liu1

1State Key Lab of Power System Operation and Control, Department of Energy and Power Engineering, Tsinghua-BP Clean Energy Centre, Tsinghua University, Beijing, 100084, China; 2Department of Chemical Engineering, Imperial College London, SW7 2AZ, United Kingdom

Trading has been recognized as an effective measure for decarbonization, especially with the recent global focus on carbon reduction targets. Due to the high overlap in participants and traded goods, carbon and electricity trading are strongly coupled, leaving the operational strategies of the companies involved in the coupled transactions unclear. Research on this coupled trading is therefore essential, as it helps companies identify optimal strategies and enables policymakers to detect potential policy loopholes. This study presents a novel game theory optimization model involving both power generation companies (GenCos) and factories. Aiming to achieve a Nash equilibrium that maximizes each company's benefits, the model explores optimal operation strategies for both power generation and consumption companies under electricity-carbon joint trading. It fully captures the operational characteristics of power generation units and the technical energy consumption of electricity-using enterprises, describing in detail the relationships between renewable energy, fossil fuels, electricity, and carbon emissions. Electricity and carbon prices in each transaction are determined through negotiation between buyers and sellers. Considering the relationship between production volume and price of the same product, the case actually encompasses three trading systems: electricity, carbon, and commodities. The model's nonlinearity, caused by the Nash equilibrium and the product of price and output, is managed using piecewise linearization and discretization, transforming the problem into a mixed-integer linear problem.

Using this triple trading model, the study quantitatively explores three issues based on a virtual case involving three GenCos and four factories: the enterprises' operational strategies under varying emission reduction requirements, the pros and cons of cap and benchmark carbon quota allocation mechanisms, and the impact of integrating zero-emission enterprises into carbon trading. Results indicate that GenCos tend to act as sellers of both electricity and carbon quotas. Meanwhile, since consumers may cut production rather than implement low-carbon technologies to lower emissions, driving up product prices to maintain profits, high electricity and carbon prices become unsustainable for GenCos due to reduced electricity demand. Moreover, while benchmark mechanisms may incentivize production, they can also lower overall system profits, which is undesirable for policymakers. Lastly, under strict carbon reduction targets, zero-emission companies may transform the carbon market into a seller's market by purchasing carbon quotas to raise carbon prices, thereby reducing electricity prices and lowering their own operating costs.
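The discretization trick can be illustrated in isolation: the Python/PuLP sketch below (toy numbers, not the authors' model) linearizes a single price-times-quantity revenue term by restricting the price to a grid of levels selected with binaries, so that p*q becomes a sum of linear terms.

```python
import pulp

price_levels = [20.0, 30.0, 40.0]    # discretized price grid, $/unit (illustrative)
q_max = 100.0
L = range(len(price_levels))

m = pulp.LpProblem("bilinear_discretization", pulp.LpMaximize)
q = pulp.LpVariable("q", 0, q_max)
y = pulp.LpVariable.dicts("pick", L, cat="Binary")
z = pulp.LpVariable.dicts("z", L, lowBound=0)          # z[l] stands in for y[l]*q

m += pulp.lpSum(y[l] for l in L) == 1                  # exactly one price level
for l in L:
    m += z[l] <= q_max * y[l]                          # z[l] = q only if level l picked
    m += z[l] <= q
    m += z[l] >= q - q_max * (1 - y[l])

# toy demand curve: the higher the picked price, the less can be sold
for l, p in enumerate(price_levels):
    m += q <= 120.0 - 2.0 * p + q_max * (1 - y[l])

m += pulp.lpSum(price_levels[l] * z[l] for l in L) - 15.0 * q   # revenue - cost
m.solve(pulp.PULP_CBC_CMD(msg=False))
print("price:", [price_levels[l] for l in L if y[l].value() == 1], "q:", q.value())
```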



Solvent and emission source dependent amine-based CO2 capture cost estimation methodology for systemic level analysis

Yi Zhao1, Aron Beck1, Hayato Hagi2, Bruno Delahaye2, François Maréchal1

1Ecole Polytechnique Fédérale de Lausanne, Switzerland; 2TotalEnergies OneTech, France

Amine-based carbon capture effectively reduces industrial emissions but faces challenges due to high investment costs and the energy penalty associated with solvent regeneration. Existing cost estimation approaches either rely on complex and costly simulations or provide overly general results, limiting their applicability for systemic analysis. This study presents a shortcut approach to estimating amine-based carbon capture costs, considering varying solvents and emission sources in terms of flow rates and CO2 concentrations. The results show that scaling effects significantly impact smaller plants, with costs dropping from 200–500 $/t-CO2 to 50–100 $/t-CO2 as capacity increases from 0.1 to 100 t-CO2/h, with monoethanolamine (MEA) as the solvent. For larger plants, heat utility costs dominate, representing around 80% of the total costs, assuming a natural gas price of 35 $/MWh (10.2 $/MMBTU). Furthermore, MEA-based plants can be up to 25% more expensive than those using alternative solvents. In short, this study provides a practical method for initial amine-based carbon capture cost estimation, enabling a systemic assessment of its technoeconomic potential and facilitating comparison with other CO2 abatement technologies.
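Qualitatively, the reported scale effect matches a two-term shortcut of the kind the study formalizes: an annualized-capex power law plus a regeneration-heat cost. The Python sketch below uses illustrative coefficient guesses, not the paper's correlations.

```python
def capture_cost(cap_t_h, regen_heat_gj_t=3.7, gas_price_mwh=35.0):
    """Rough specific cost, $/t-CO2, for a plant capturing cap_t_h tonnes per hour."""
    capex_annual = 8.0e6 * (cap_t_h / 10.0) ** 0.65      # $/y, power-law scaling (guess)
    heat_cost = regen_heat_gj_t / 3.6 * gas_price_mwh    # $/t, regeneration duty
    t_per_year = cap_t_h * 8000.0                        # assumed operating hours
    return capex_annual / t_per_year + heat_cost + 10.0  # + fixed O&M guess, $/t

for cap in (0.1, 1, 10, 100):
    print(f"{cap:6.1f} t/h -> {capture_cost(cap):6.1f} $/t-CO2")
```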



Energy Planning Toward Absolute Environmental Sustainability: Key Decisions and Actionable Insights Through Interpretable Machine Learning

Nicolas Ghuys1, Diederik Coppitters1, Anne van den Oever2, Maarten Messagie2, Francesco Contino1, Hervé Jeanmart1

1Université catholique de Louvain, Belgium; 2Vrije Universiteit Brussel, Belgium

Energy planning models traditionally support the energy transition by focusing on cost-optimized solutions that limit greenhouse gas emissions. However, this narrow focus risks burden-shifting, where reducing emissions increases other environmental pressures, such as freshwater use, solving one problem while creating others. Therefore, we integrated Planetary Boundary-based Life Cycle Assessment (PB-LCA) into energy planning to identify solutions that respect absolute environmental sustainability limits. However, integrating PB-LCA into energy planning introduces challenges, such as adopting distributive justice principles, interpreting trade-offs across PB indicator impacts, and managing subjective weighting in the objective function. To address these, we employed weight screening and interpretable machine learning to extract key decisions and actionable insights from the numerous quantitative solutions generated. Preliminary results for a single weighting scenario show that the transition scenario exceeds several PB thresholds, particularly for ecosystem quality and mineral resource depletion, underscoring the need for a balanced weighting scheme. Next, we will apply screening and machine learning to pinpoint key decisions and provide actionable insights for achieving absolute environmental sustainability.



 