2:30pm - 2:50pm
Application of pqEDMD to modelling and control of bioprocesses
Camilo Garcia-Tenorio, Guilherme Araujo Pimentel, Laurent Dewasme, Alain Vande Wouwer
University of Mons, Belgium
Extended Dynamic Mode Decomposition (EDMD) and its variant, pqEDMD, which uses a p-q quasi-norm reduction of the polynomial basis functions, are appealing tools to derive linear operators approximating the dynamic behavior of nonlinear systems. This study highlights how the methodology can be applied to data-driven modeling and control of bioprocesses by discussing the selection of several ingredients of the method, such as the polynomial basis, its order, data sampling, and the preparation of training and test sets, and ultimately the exploitation of the model in linear model predictive control.
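As an illustration of the underlying idea only, the minimal Python sketch below lifts snapshot pairs through a polynomial dictionary and fits a linear (Koopman-style) operator by least squares. The dictionary construction, function names, and synthetic data are assumptions for illustration; the p-q quasi-norm reduction of the basis and the model predictive control layer of pqEDMD are not shown.

```python
# Minimal EDMD sketch (illustrative, not the authors' pqEDMD implementation).
# Snapshot pairs (x_k, x_{k+1}) are lifted with a polynomial dictionary Psi,
# and a linear operator K is fitted by least squares so that Psi(x_{k+1}) ≈ Psi(x_k) K.
import numpy as np
from itertools import combinations_with_replacement

def poly_dictionary(X, order=2):
    """Lift states (n_samples x n_states) with all monomials up to 'order'."""
    n_samples, n_states = X.shape
    feats = [np.ones(n_samples)]
    for degree in range(1, order + 1):
        for idx in combinations_with_replacement(range(n_states), degree):
            feats.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(feats)

def edmd(X, Y, order=2):
    """X, Y: snapshot matrices with Y[k] the successor state of X[k]."""
    PsiX, PsiY = poly_dictionary(X, order), poly_dictionary(Y, order)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)  # PsiX @ K ≈ PsiY
    return K

# Usage on synthetic snapshots of a simple nonlinear map
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2])
K = edmd(X, Y, order=2)
print(K.shape)  # (n_observables, n_observables)
```

The fitted linear operator can then serve as the prediction model inside a standard linear MPC formulation, which is the route mentioned in the abstract.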
2:50pm - 3:10pm
Incorporating Process Knowledge into Latent-Variable Models: An Application to Root Cause Analysis in Bioprocesses
Tobias Overgaard1,2, Maria-Ona Bertran3, John Bagterp Jørgensen1, Bo Friis Nielsen1
1Technical University of Denmark, Department of Applied Mathematics and Computer Science, Matematiktorvet, Building 303B, DK-2800 Kgs. Lyngby, Denmark; 2Novo Nordisk A/S, PS API Manufacturing, Science & Technology, Smørmosevej 17-19, DK-2880 Bagsværd, Denmark; 3Novo Nordisk A/S, PS API Expansions, Hallas Alle 1, DK-4400 Kalundborg, Denmark
Troubleshooting performance variations in batch bioprocesses at a plant-wide level involves identifying the process step responsible for these variations and analyzing the root cause. While root cause analysis is well-documented, pinpointing the specific process step responsible for variations is less explored due to complexities like serial-parallel unit arrangements [1]. In commercial production, measured process variables may not reveal the root cause, as tightly controlled variables show minimal variation, thus hiding critical information [2]. Therefore, incorporating developmental data from smaller-scale experiments is crucial for identifying the cause of variation.
We propose a structured methodology for troubleshooting plant-wide batch bioprocesses in multi-source data environments using latent-variable techniques. Initially, we select a process step where unexplained performance variations manifest, termed the "step of manifestation". Next, a sequential multi-block partial least squares (SMB-PLS) model spanning the process flow diagram up to the step of manifestation is built [3]. This model aims to isolate a potential step where the variation originates, termed the "step of origin". The SMB-PLS model captures connectivity information from a multi-step process by linking data blocks from each step sequentially and uses orthogonalization to separate correlated information between blocks, retaining the unique information of each block [4]. To handle parallel units, data blocks are arranged using low-level fusion, where the blocks are concatenated and analyzed as a single block.
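As a rough sketch of this sequential idea (not the exact SMB-PLS algorithm of [3, 4]), the Python fragment below fits a PLS model to each block in process order and orthogonalizes every downstream block against the scores of the upstream blocks, so that only the variation unique to that block remains. Block sizes, component counts, and the synthetic data are illustrative assumptions.

```python
# Sketch of sequential multi-block PLS with orthogonalization between blocks.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def sequential_block_pls(blocks, Y, n_components=2):
    """blocks: list of (n_batches x n_vars) arrays ordered along the process flow."""
    models, upstream_scores = [], []
    for Xb in blocks:
        Xb = Xb - Xb.mean(axis=0)
        # Remove the part of this block already explained by upstream block scores
        for T in upstream_scores:
            Xb = Xb - T @ np.linalg.lstsq(T, Xb, rcond=None)[0]
        pls = PLSRegression(n_components=n_components).fit(Xb, Y)
        models.append(pls)
        upstream_scores.append(pls.x_scores_)
    return models

# Usage with synthetic data: 50 batches, two sequential process steps, one response
rng = np.random.default_rng(1)
X1, X2 = rng.normal(size=(50, 6)), rng.normal(size=(50, 4))
Y = X1[:, :1] + 0.5 * X2[:, :1] + 0.1 * rng.normal(size=(50, 1))
models = sequential_block_pls([X1, X2], Y)
print([m.x_scores_.shape for m in models])  # per-block scores capturing unique variation
```

Parallel units would be handled before this step by concatenating their data into a single block per process step (low-level fusion), as described above.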
Once the step of origin is isolated using the SMB-PLS model, an in-depth investigation of the step is performed by incorporating knowledge from small-scale experiments. The aim of this data combination is to unveil internal dynamics and variable interactions that cause the performance variation. The joint-Y PLS (JY-PLS) model is used to incorporate knowledge from different scales, capturing the common variable structure across multiple scales [5].
We apply this multi-step, multi-scale methodology to troubleshoot a commercial batch bioprocess producing an active pharmaceutical ingredient. We find that downstream productivity is limited by unexplained variability during cell culture production. To gain further insights, bioreactor data from small-scale developmental studies are paired with commercial-scale data. The output data include quality attributes related to the final product concentration profile along with various metabolites, and the input data include process variables like temperature, pH, pO2, dilution rate, and raw material components such as seed inocula and glucose.
Given the data-driven nature of the methodology, validation of the process improvements is crucial. Identified effects and hypotheses are discussed by process specialists and data scientists, which has been key to obtaining valuable insights. Furthermore, the model's adherence to the flowsheet design and system scale enhances transparency and fosters effective collaboration between process experts and data scientists. Through this collaboration, various process-variable interactions that impact cell culture performance are identified.
References
1. F. Zuecco, et al., Processes 9 1074 (2021).
2. T. Kourti, Crit. Rev. Anal. Chem. 36 257 (2006).
3. J. Lauzon-Gauthier, et al., Chemom. Intell. Lab. Syst. 101 72 (2018).
4. Q. Zhu, et al., Chemom. Intell. Lab. Syst. 252 105192 (2024).
5. S. García-Muñoz, et al., Chemom. Intell. Lab. Syst. 79 101 (2005).
3:10pm - 3:30pm
Future Forecasting of Dissolved Oxygen Concentration in Wastewater Treatment Plants using Machine Learning Techniques
Sena Kurban1, Aslı Yasmal1, Ocan Şahin1, Aycan Sapmaz1, Mustafa Oktay Samur1, Gizem Kuşoğlu Kaya1, Gözde Akkoç2, Mahmut Kutay Atlar2
1Turkish Petroleum Refinery, 41780, Körfez, Kocaeli, Turkey; 2Turkish Petroleum Refinery, 71480, Merkez, Kırıkkale, Turkey
Water is essential to life, yet its quality can be greatly degraded by pollution, which undermines the sustainable and effective use of water resources [1]. The goals of the wastewater treatment plant (WWTP) process are to reuse water and to prevent pollution of water sources [2]. In an oil refinery, the WWTP consists of three stages. In the pretreatment stage, mechanical and physical techniques such as gravity separators, dissolved air flotation, filtration, and sedimentation are used to remove oil and suspended solids from the water. Secondary and tertiary treatments follow this stage to eliminate organic materials and meet the required discharge limits. By the time tertiary treatment, such as biological treatment, is completed, more than 99 percent of the toxic and harmful pollutants have been removed [3].
Predicting water quality is crucial for managing and planning the water environment as well as for preventing and controlling water pollution. Dissolved oxygen (DO) is one such crucial water quality indicator [1]. This study focuses on forecasting dissolved oxygen levels in the activated sludge tanks of a biological treatment unit at an oil refinery’s WWTP. Maintaining a proper oxygen concentration is crucial for microbial activity in the sludge, as insufficient oxygen can disrupt the biological breakdown of pollutants. The study’s aim is to develop predictive models that identify operational risks early, allowing for better efficiency in the treatment process and optimizing resources such as chemicals, bacterial cultures, and aeration systems. Another key goal is to provide operators and engineers with early warnings about potential problems in the biological treatment stage, reducing reliance on laboratory tests. This proactive approach ensures that optimal oxygen levels are maintained for the bacteria, leading to increased operational efficiency, reduced costs, and enhanced water quality, thereby supporting the sustainability of wastewater treatment.
Influenced by influent flow rates, contaminant levels, chemical conditions, and external factors such as weather and wastewater composition, wastewater treatment processes are complex, nonlinear systems that are challenging to model accurately [4]. To tackle these issues, thorough data analysis and advanced modeling techniques are essential. In this study, machine learning models including the Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) were applied to a two-year real-time dataset, and their performance was evaluated on 8-hour datasets with 23 features. The models were trained, validated, and tested on actual process data. Additionally, principal component analysis (PCA) was employed to clarify relationships in the data.
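The sketch below shows the general shape of such a GRU-based forecasting model in Python; the lookback window, layer sizes, training settings, and synthetic data are assumptions for illustration and do not reflect the authors' configuration.

```python
# Illustrative GRU soft sensor for dissolved-oxygen forecasting.
# Inputs: sliding windows of 23 scaled process features; output: DO at the horizon.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 48, 23  # hypothetical lookback length; 23 features as in the study

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),  # predicted dissolved oxygen concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic stand-in for the windowed, scaled process data
X = np.random.rand(500, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE] on the stand-in data
```

An LSTM or simple RNN variant is obtained by swapping the GRU layer for tf.keras.layers.LSTM or tf.keras.layers.SimpleRNN.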
Overall, the results show that GRU-based soft sensors can accurately forecast the oxygen concentration in activated sludge ponds used for biological wastewater treatment, with good performance (R² of 0.7, MSE of 0.01, and MAE of 0.07). Although the model provides effective forecasts for system control, further improvements could be achieved by incorporating climate data, broadening the feature set, or optimizing hyperparameters to improve accuracy in this complex, nonlinear environment.
3:30pm - 3:50pm
Exploring Design Space and Optimization of nutrient factors for maximizing lipid production in Metschnikowia pulcherrima with Design of Experiments
Nichakorn Fungprasertkul, James Winterburn, Peter Martin
The University of Manchester, United Kingdom
Unsaturated fatty acids should be the primary source of dietary fat for humans (WHO, 2022), because they can decrease the risk of heart disease by lowering cholesterol levels (NHS UK, 2023). However, as the global population grows, so do the demands for food and for crop-growing area (FAO, 2020). Oleaginous yeasts are promising alternative microorganisms for commercial lipid production due to their high volumetric productivity, with Metschnikowia pulcherrima an under-explored oleaginous yeast with potential as a lipid producer (Abeln and Chuck, 2021). Nutrient factors are critical to achieving high-productivity lipid production. A sensitivity test identified the carbon and nitrogen sources in nitrogen-limited broth (NLB), namely glucose, yeast extract, and ammonium sulphate, as important factors for lipid production in M. pulcherrima. Response surface methodology (RSM), involving sets of 15 experimental runs in a three-factor, three-level Box-Behnken design (BBD), was implemented to explore the carbon and nitrogen source design space. Quadratic surfaces were least-squares fitted and used to identify regions of optimal lipid yield. Multiple sets of runs were conducted, with the parameter ranges progressively adapted, until a clear optimum was identified. The highest total lipid production occurred in the low carbon concentration range (2.27-21.5 g/L), which suggests a more productive process when compared to NLB medium (30.4 g/L of carbon). The optimal carbon concentration was 14.8 g/L, whereas the dependence on nitrogen was not found to be significant. After validation, the yield at the optimal point (YP/S) was 2.3 times higher than with NLB medium, because the depletion of glucose at the end of fermentation (72-104 h) contributed to a large increase in total lipid production at the optimal point (94.5%), which was 60 percentage points higher than with NLB medium (34.5%).
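For illustration, the sketch below constructs the coded three-factor Box-Behnken design (12 edge midpoints plus 3 centre runs = 15 experiments), fits a quadratic response surface by least squares, and locates the predicted optimum on a coded grid; the factor coding and the response values are placeholders, not the experimental data reported above.

```python
# Coded three-factor, three-level Box-Behnken design and quadratic RSM fit (sketch).
import numpy as np
from itertools import product

# 12 edge midpoints (two factors at +/-1, third at 0) plus 3 centre runs
edges = []
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([-1, 1], repeat=2):
        run = [0.0, 0.0, 0.0]
        run[i], run[j] = a, b
        edges.append(run)
design = np.array(edges + [[0.0, 0.0, 0.0]] * 3)  # coded glucose, yeast extract, ammonium sulphate

def quad_features(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Placeholder lipid responses for the 15 runs (hypothetical values)
y = np.array([1.2, 1.5, 1.4, 1.9, 1.1, 1.6, 1.3, 1.8,
              1.0, 1.4, 1.2, 1.7, 2.1, 2.0, 2.1])

beta, *_ = np.linalg.lstsq(quad_features(design), y, rcond=None)

# Predicted optimum over a coded grid (would be mapped back to g/L in practice)
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
pred = quad_features(grid) @ beta
print("predicted optimum (coded levels):", grid[pred.argmax()])
```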
Reference
Abeln, F., Chuck, C.J. The history, state of the art and future prospects for oleaginous yeast research. Microb Cell Fact 20, 221 (2021). https://doi.org/10.1186/s12934-021-01712-1
3:50pm - 4:10pm
Adaptable dividing-wall column design for intensified purification of butanediols after fermentation
Tamara Jankovic, Siddhant Sharma, Adrie J.J. Straathof, Anton A. Kiss
Delft University of Technology, The Netherlands
2,3-, 1,4- and 1,3-butanediol (BDOs) are valuable platform chemicals traditionally produced through petrochemical routes. Alternatively, there is growing interest in synthesizing these chemicals through fermentation processes. Given the substantial research efforts dedicated to developing genetically modified microorganisms for BDO production from renewable sources, fermentation has great potential to become a sustainable alternative to fossil fuel-based processes. Nonetheless, several challenges remain with the fermentation processes that hinder downstream processing, such as low product titers, the presence of microorganisms, the formation of fermentation by-products, etc. Additionally, BDOs are high-boiling components (180 – 228 °C), which may lead to energy-intensive recovery. Consequently, the costs associated with the purification process may substantially increase the total production costs. Despite this limitation, there is still potential for improvement in the downstream processing of the BDOs. Thus, the main objective of this original research is to develop a state-of-the-art large-scale downstream processing design (broth processing capacity of 160 ktonne/y with a production capacity of 11 – 15 ktonne/y) that may be easily adapted to purify 2,3- (case 1), 1,4- (case 2) or 1,3-BDO (case 3) after fermentation and conventional filtration and ion-exchange steps.
In all three cases, Aspen Plus was employed as a computer-aided process engineering (CAPE) tool to design the BDO purification processes, whereby rigorous simulations were performed for all process operations. Data from the published literature were used to obtain realistic compositions of the fermentation broths. Generally, the concentrations of BDO and water are 7-9 and 87-91 wt%, respectively, while both light and heavy impurities are present. The developed process design includes an initial preconcentration step in a heat pump-assisted vacuum distillation column to remove most of the water and some light impurities (e.g. ethanol). This step allows a significant reduction in total energy requirements and equipment size in the final purification step. The heart of the developed process is an integrated dividing-wall column that effectively recovers high-purity BDO (>99.4 wt% in all cases) from the remaining light (formic and acetic acids, etc.) and heavy impurities (succinic and lactic acids, glucose, etc.).
Finally, a single process design was shown to recover over 99% of the BDO from the different fermentation processes cost-effectively (0.252 – 0.291 $/kg BDO) and energy-efficiently (1.667 – 2.075 kWh of thermal energy per kg of BDO). Implementation of advanced process intensification and heat integration techniques reduced energy requirements by over 33% compared to existing literature designs. Furthermore, the adaptable purification process offers flexibility in developing sustainable business models. Lastly, the results of this novel work highlight the importance of using CAPE tools in developing competitive bioprocesses by demonstrating that computer-aided simulations may play a crucial role in advancing sustainable industrial fermentation.
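As a back-of-the-envelope illustration of how a specific-energy figure of this kind relates to plant-level quantities, the sketch below divides an assumed total thermal duty by the annual BDO production; the duty and operating hours are hypothetical placeholders, not values taken from the study.

```python
# Rough check of specific thermal energy (kWh of thermal energy per kg of BDO).
annual_bdo_kg = 13e6           # ~13 ktonne/y, mid-range of the 11-15 ktonne/y capacity
operating_hours = 8000         # assumed operating hours per year (hypothetical)
thermal_duty_kw = 3000         # assumed average reboiler duty in kW thermal (hypothetical)

specific_energy = thermal_duty_kw * operating_hours / annual_bdo_kg
print(f"{specific_energy:.3f} kWh(th)/kg BDO")
```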