Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
T6: Digitalization and AI - Session 6
Time:
Wednesday, 09/July/2025:
2:30pm - 3:50pm

Chair: Marco Seabra Reis
Co-chair: Leonhard Urbas
Location: Zone 3 - Room D049

KU Leuven Ghent Technology Campus Gebroeders De Smetstraat 1, 9000 Gent

Presentations
2:30pm - 2:50pm

Application of Artificial Intelligence in a process simulation tool

Nikhil Rajeev1, Suresh Kumar Jayaraman1, Prajnan Das2, Srividya Varada1

1AVEVA Group Ltd, United States of America; 2Cognizant Technology Solutions U.S. Corporation, United States of America

Process engineers in the Chemical and Oil & Gas industries extensively use process simulation for the design, development, analysis, and optimization of complex systems. This study investigates the integration of Artificial Intelligence (AI) with AVEVA Process Simulation (APS), a next-generation commercial simulation tool. We propose a framework for a custom chatbot application designed to assist engineers in developing and troubleshooting simulations. This chatbot utilizes a custom-trained model to transform engineer prompts into standardized queries, facilitating access to essential information from APS. The chatbot extracts critical data regarding solvers and thermodynamic models directly from APS to help engineers develop and troubleshoot process simulations. Furthermore, we compare the performance of our custom model against OpenAI technology. Our findings indicate that this integration significantly enhances the usability of process simulation tools, promoting more innovative and cost-effective engineering solutions.
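The prompt-to-query pipeline described above can be sketched as follows. This is a minimal illustration only: the abstract does not disclose the APS API, so the query names, the mock session dictionary, and the keyword-based intent classifier (standing in for the custom-trained model) are all hypothetical.

```python
# Hypothetical sketch: maps free-form engineer prompts to standardized
# queries, then resolves them against (mock) simulation state. The real
# system uses a custom-trained model and the actual APS interface.

def classify_intent(prompt: str) -> str:
    """Map a free-form engineer prompt to a standardized query type."""
    text = prompt.lower()
    if any(word in text for word in ("solver", "converge", "iteration")):
        return "GET_SOLVER_STATUS"
    if any(word in text for word in ("thermo", "property", "fugacity")):
        return "GET_THERMO_MODEL"
    return "GENERAL_HELP"

def answer(prompt: str, session: dict) -> str:
    """Resolve a standardized query against mock simulation-session data."""
    query = classify_intent(prompt)
    if query == "GET_SOLVER_STATUS":
        return f"Solver: {session['solver']}, converged: {session['converged']}"
    if query == "GET_THERMO_MODEL":
        return f"Thermodynamic model: {session['thermo_model']}"
    return "Please rephrase your question."
```

In the paper's framework, the intermediate standardized query is the key design choice: it decouples the language model from the simulator, so the extraction of solver and thermodynamic-model data stays deterministic.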



2:50pm - 3:10pm

Reinforcement Learning-Based Optimization of Shell and Tube Heat Exchangers

Luana de Pinho Queiroz1,2,3, Olve Ringstad Bruaset3, Ana Mafalda Ribeiro1,2, Idelfonso Bessa dos Reis Nogueira3

1LSRE-LCM – Laboratory of Separation and Reaction Engineering - Laboratory of Catalysis and Materials, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-465, Portugal; 2ALiCE – Associate Laboratory in Chemical Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-465, Portugal; 3Chemical Engineering Department, Norwegian University of Science and Technology, Sem Sælandsvei 4, Kjemiblokk 5, Trondheim, 793101, Norway

Heat exchangers play a crucial role in a wide range of industries, facilitating heat transfer between fluids at different temperatures, significantly impacting operational efficiency and energy consumption [1]. Their application is vital in industrial processes where optimizing heat transfer can substantially reduce operational costs and energy demand [2]. However, the design of heat exchangers presents several challenges, particularly in areas such as rating, sizing, and overall efficiency [3]. Due to the inherent complexities involved, traditional design approaches often rely on iterative, manual adjustments that may not guarantee optimal results. To address these limitations, recent research has begun exploring the integration of Scientific Machine Learning (SciML), which combines scientific models with machine learning techniques to streamline and enhance the optimization process [4]. Although the application of SciML in heat exchanger design is still emerging, early studies show its potential to offer valuable insights into heat transfer optimization.

This research introduces a model for optimizing the design of shell and tube heat exchangers using Q-learning, a reinforcement learning technique. The primary aim is to bridge the gap between heat exchanger optimization and the growing field of SciML. The model was developed by training an agent within a simulated environment, where it iteratively adjusted design configurations to maximize a reward function based on heat transfer effectiveness and pressure drop. A comprehensive database informed the simulation of heat exchanger specifications, parameters, and fundamental heat transfer principles, such as the ɛ-NTU method. The reward function was designed to balance maximizing effectiveness and minimizing pressure drop, ensuring an optimal trade-off between these competing performance factors.
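A minimal sketch of a Q-learning loop in this spirit, assuming a toy set of candidate designs with synthetic effectiveness and pressure-drop values (all numbers hypothetical, not from the study). The reward trades effectiveness against a pressure-drop penalty, and, echoing the abstract's finding, the simplest design wins once that penalty is applied.

```python
import random

# Hypothetical candidate designs: name -> (effectiveness, pressure drop).
# Values are illustrative only, not from the paper's database.
DESIGNS = {
    "1-pass": (0.70, 0.2),
    "2-pass": (0.80, 0.6),
    "4-pass": (0.88, 1.5),
}
W_DP = 0.5  # weight penalizing pressure drop in the reward

def reward(design: str) -> float:
    """Balance maximizing effectiveness against minimizing pressure drop."""
    eff, dp = DESIGNS[design]
    return eff - W_DP * dp

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration, one-step episodes."""
    rng = random.Random(seed)
    q = {d: 0.0 for d in DESIGNS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            d = rng.choice(list(DESIGNS))      # explore
        else:
            d = max(q, key=q.get)              # exploit
        q[d] += alpha * (reward(d) - q[d])     # single-step update, no bootstrap
    return q

q = train()
best = max(q, key=q.get)
```

Here the 4-pass design has the highest effectiveness (0.88) but its pressure-drop penalty drops its reward to 0.13, so the agent settles on the simplest configuration.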

During training, the most straightforward design configurations consistently achieved the highest reward across most heat exchanger specifications. While more complex configurations demonstrated superior heat transfer efficiency, the lower pressure drop associated with the simpler designs ultimately proved decisive in performance evaluations. This outcome highlights the potential for machine learning techniques like Q-learning to identify efficient design solutions that traditional methods may otherwise overlook. However, this work represents an early exploration of the approach, and further developments are needed to create a more versatile and practical tool. Future improvements should focus on increasing the model’s adaptability by incorporating a broader range of fluid types, utilizing more detailed heat transfer equations, expanding the set of design configurations, and refining the reward function to account for additional performance parameters.

[1] Balaji, C., Srinivasan, B., & Gedupudi, S. (2020). Heat Transfer Engineering: Fundamentals and Techniques. Academic Press.

[2] Caputo, A. C., Pelagagge, P. M., & Salini, P. (2008). Heat exchanger design based on economic optimisation. Applied Thermal Engineering, 28(10), 1151-1159.

[3] Saxena, R., & Yadav, S. (2013). Designing steps for a heat exchanger. International Journal of Engineering Research & Technology, 2(9), 943-959.

[4] Iwema, J. (2023, January 16). Scientific machine learning. Wageningen University & Research. https://sciml.wur.nl/reviews/sciml/sciml.html



3:10pm - 3:30pm

Structural Optimization of Translucent Monolith Reactors through Multi-objective Bayesian Optimization

Onur Can Boy1, Ulderico Di Caprio1, Mumin Enis Leblebici1, Idelfonso Nogueira2

1KU Leuven, Department of Chemical Engineering, Centre for Industrial Process Technology; 2Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU)

Photochemical reactions are a promising alternative to thermal and chemical activation methods, improving process selectivity and energy efficiency. In photochemical systems, monoliths accommodating repetitive structures have been shown to be a successful way to miniaturize and intensify chemical processes, ensuring a high mixing efficiency and surface-area-to-volume ratio while being easily scalable. Through multiple stacked channels that create enhanced light scattering, monoliths avoid the lower photochemical space-time yield (PSTY) caused by the mismatch between the size of the light source and the microreactor, positioning them as a better alternative to microreactors [1]. However, they have many critical design parameters, such as the number, size, and shape of the channels to be stacked, which must be considered together with the light source characteristics to maximize light usage and reactor efficiency. Such a multi-parameter optimization problem is nowadays executed manually by human designers; optimization algorithms offer an alternative to this approach. This work proposes a methodology to automatically design translucent monoliths for photochemical reactions, leveraging Multiphysics simulations and Bayesian optimization (BO).

As a demonstration case, the geometry used by Jacobs et al. is optimized through BO with multi-objective cost criteria, namely maximizing both PSTY and the space-time yield (STY). The ray tracing module of COMSOL Multiphysics is used to model and simulate light behavior. The optimization is performed over four tunable parameters: the characteristic channel diameter, the number of channels stacked vertically, the channel shape, and the channel rotation. Candidate channel shapes are square, circle, ellipse, and plus sign. A competing relationship between STY and PSTY is observed. With other factors constant, reactor volume is the dominating factor for STY maximization; hence, STY maximization favors smaller volumes. However, smaller volumes waste more energy because less power is absorbed, and PSTY is considered to prevent this. Maximizing PSTY, in turn, significantly decreases the outlet concentration. The trade-off between absorbed energy and output concentration makes it necessary to adjust the weights of STY and PSTY based on the desired output. As a result, PSTY and STY are optimized simultaneously to ensure they meet the minimum conditions of the benchmark work. By selecting square-shaped channels and applying a 34° rotation angle, an STY improvement of 25% and a PSTY improvement of 20% were achieved, meaning the same amount of light was absorbed in a smaller reactor volume. Plus-sign channels with a 15° rotation angle improved both STY and PSTY by 15%.
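The scalarized four-variable, two-objective search can be illustrated as below. This is a toy stand-in: the synthetic STY/PSTY functions replace the COMSOL ray-tracing model, and an exhaustive scan over a small grid replaces the Gaussian-process-driven BO loop; only the structure of the weighted search over diameter, channel count, shape, and rotation is kept.

```python
import itertools
import math

# Synthetic shape factors and objective functions -- illustrative only,
# not derived from the study's simulations.
SHAPES = {"square": 1.00, "circle": 0.95, "ellipse": 0.90, "plus": 1.05}

def sty(diameter, n_channels, shape, rotation):
    """Toy STY: smaller reactor volume gives a higher space-time yield."""
    volume = n_channels * diameter ** 2
    return SHAPES[shape] * (1.0 + 0.1 * math.sin(math.radians(rotation))) / volume

def psty(diameter, n_channels, shape, rotation):
    """Toy PSTY: more channel area absorbs more of the incident light."""
    absorbed = 1.0 - math.exp(-0.5 * n_channels * diameter)
    return SHAPES[shape] * absorbed * (1.0 + 0.1 * math.cos(math.radians(rotation)))

def best_design(weight=0.5):
    """Scalarize the two objectives and scan the design grid exhaustively."""
    candidates = itertools.product(
        [0.5, 1.0, 2.0],   # channel diameter (arbitrary units)
        [2, 4, 8],         # channels stacked vertically
        SHAPES,            # channel shape
        [0, 15, 34, 45],   # rotation angle (degrees)
    )
    def score(c):
        return weight * sty(*c) + (1 - weight) * psty(*c)
    return max(candidates, key=score)
```

With `weight=1.0` (pure STY) the scan picks the smallest volume, while `weight=0.0` (pure PSTY) picks the largest absorbing area, reproducing in miniature the competing relationship the abstract describes.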

This study proposes a methodology to increase the efficiency of already optimized photochemical reactor designs by achieving better light scattering using BO. The results show that improving either STY or PSTY by up to 20% is possible. A competing relationship between STY and PSTY is also observed when the materials and light characteristics are unchanged.

1-Jacobs, M. et al. (2022) ‘Scaling up multiphase photochemical reactions using translucent monoliths’, Chemical Engineering and Processing - Process Intensification, 181, doi:10.1016/j.cep.2022.109138.



3:30pm - 3:50pm

A novel approach to gradient evaluation and efficient deep learning: A hybrid method

Bogdan Dorneanu, Vasileios K. Mappas, Harvey Arellano-Garcia

Brandenburg University of Technology Cottbus-Senftenberg, Germany

Machine learning approaches, and deep learning in particular, continue to face significant challenges in the efficient training of large-scale models and in accurate gradient evaluation (Ahmed et al., 2023). These challenges are interconnected, as efficient training often relies on precise and computationally feasible gradient calculations. This work introduces a suite of novel methodologies that enhance both the training process of deep learning networks (DLNs) and gradient evaluation in complex systems.

This contribution presents an innovative approach to DLN training by adapting the block coordinate descent (BCD) method (Yu, 2023), which optimizes individual layers sequentially. This method is integrated with a traditional batch-based training method, creating a hybrid approach that leverages the strengths of both methodologies. To further enhance the optimization process, the study explores the use of the Iterated Control Random Search (ICRS) method (Li et al., 2016) for initial parameter selection and investigates the application of quasi-Newton methods such as L-BFGS with restricted iterations.
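A minimal sketch of block coordinate descent in this spirit (not the authors' implementation): a two-layer linear model is trained by alternately updating one layer while the other is frozen. The batch-based hybrid, ICRS initialization, and L-BFGS refinements are omitted; the data and learning rate are illustrative.

```python
# Toy dataset following y = 2x; the product w2 * w1 should converge to 2.
DATA = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def loss(w1, w2):
    """Mean squared error of the two-layer linear model y_hat = w2 * w1 * x."""
    return sum((w2 * w1 * x - y) ** 2 for x, y in DATA) / len(DATA)

def bcd_train(w1=0.5, w2=0.5, lr=0.02, sweeps=200):
    """Block coordinate descent: each sweep updates one layer at a time."""
    for _ in range(sweeps):
        # Block 1: gradient step on w1 with w2 frozen
        g1 = sum(2 * (w2 * w1 * x - y) * w2 * x for x, y in DATA) / len(DATA)
        w1 -= lr * g1
        # Block 2: gradient step on w2 with the freshly updated w1 frozen
        g2 = sum(2 * (w2 * w1 * x - y) * w1 * x for x, y in DATA) / len(DATA)
        w2 -= lr * g2
    return w1, w2
```

Each block subproblem here is quadratic in the free layer, which is what makes the sequential layer-wise updates cheap relative to a joint full-gradient step.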

Complementing these advancements in DLN training, the study also tackles the challenge of gradient evaluation in large-scale systems, a crucial step for efficient training and optimization (Lwakatare et al., 2020). It introduces a generalized modular strategy based on a novel adjoint-based method, offering a flexible and robust solution for gradient evaluation in complex hierarchical multiscale systems. This approach is particularly valuable for machine learning applications dealing with high-dimensional data or complex model architectures, as it allows for more efficient and accurate gradient computations during the training process.
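The modular adjoint idea can be sketched for a chain of scalar modules (a hedged illustration, not the authors' formulation): each module exposes its own local derivative, and a reverse sweep accumulates the overall gradient module by module via the chain rule.

```python
import math

# Each module is a pair (forward function, local derivative).
# The composed system is y = f3(f2(f1(x))) = 3 * sin(x**2) + 1.
MODULES = [
    (lambda x: x * x,       lambda x: 2 * x),        # f1 and f1'
    (lambda x: math.sin(x), lambda x: math.cos(x)),  # f2 and f2'
    (lambda x: 3 * x + 1,   lambda x: 3.0),          # f3 and f3'
]

def forward_with_tape(x):
    """Forward pass that records each module's input for the reverse sweep."""
    tape = []
    for f, _ in MODULES:
        tape.append(x)
        x = f(x)
    return x, tape

def adjoint_gradient(x):
    """Reverse (adjoint) sweep: multiply local derivatives back to front."""
    _, tape = forward_with_tape(x)
    grad = 1.0  # d(output)/d(output)
    for (_, df), x_in in zip(reversed(MODULES), reversed(tape)):
        grad *= df(x_in)  # chain rule, applied one module at a time
    return grad
```

The appeal for hierarchical systems is that each module only needs to supply its own local sensitivity; the adjoint sweep composes them without ever forming the full Jacobian.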

By addressing both the training efficiency of DLNs and the gradient evaluation in large-scale systems, this research provides a comprehensive set of tools to address some of the most pressing challenges in contemporary machine learning. The proposed framework offers promising avenues for improving scalability, efficiency, and effectiveness of machine learning algorithms, particularly in handling complex high-dimensional problems increasingly common in real-world applications.

Utilizing relevant examples from process systems engineering, it is demonstrated how the integration of these methods can directly contribute to more efficient and effective training of large-scale systems.

References

Ahmed, S.A. et al. 2023. Deep learning modelling techniques: current progress, applications, advantages, and challenges, Artificial Intelligence Review 56, 13521-13617

Li, B. et al. 2016. ICRS-Filter: A randomized direct search algorithm for constrained nonconvex optimization problems, Chemical Engineering Research and Design 106, 178-190

Lwakatare, L.E. et al. 2020. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions, Information and Software Technology 127, 106368

Yu, Z. 2023. Block coordinate type methods for optimization and learning, Analysis and Applications 21, 777-817



 
Conference: ESCAPE | 35