Conference Agenda
Tech. Session 3-7. ML for Critical Heat Flux - III
Presentations
10:20am - 10:45am
ID: 1109 / Tech. Session 3-7: 1
Full Paper, Track 7: Digital Technologies for Thermal Hydraulics
Keywords: Critical heat flux, Large language model, AI-Agent, Bayesian optimization, Uncertainty of ML model

A Comparative Study of Large Language Model Agents for Data-Driven Critical Heat Flux Prediction
Texas A&M University, United States of America

In this work, we compare human-developed and Artificial Intelligence (AI)-generated models for predicting Critical Heat Flux (CHF) in nuclear reactor safety analysis. The study harnesses AI and Machine Learning (ML) to develop predictive models that learn from experimental data, specifically the extensive NRC CHF database. We compare human-developed models, optimized via deep ensembles and Bayesian optimization, with models developed autonomously by AI agents built on large language models (LLMs). The human-developed models use a Gaussian distribution approach for predictions, with uncertainty quantified through the ensemble variance. Bayesian optimization refines hyperparameters such as the learning rate and batch size, improving prediction accuracy as measured by Root Mean Square Error (RMSE). In contrast, the LLM-based AI-agent system autonomously created CHF predictive models with a neural network architecture. The LangChain suite facilitates system interactions, the execution of Python scripts, and task management through LangSmith and LangGraph, simulating a multi-agent system for an automated workflow that encompasses model development, training, and evaluation. The performance comparison between the human- and AI-developed models focuses on prediction accuracy, uncertainty quantification, and computational efficiency. The AI models demonstrated performance comparable to that of the human-optimized models, showcasing their potential to automate nuclear safety analysis tasks. This study highlights the promise of AI in enhancing nuclear reactor safety analysis. Future work should focus on integrating AI models with advanced simulation tools and expanding their application to broader safety analysis cases, including transients.

10:45am - 11:10am
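As a rough illustration of the deep-ensemble uncertainty quantification described in the first abstract (ID 1109), the sketch below builds a small ensemble of bootstrap-fitted linear models, standing in for neural networks, and reports the ensemble mean and variance as a Gaussian-style prediction. The data, model form, and function names are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def make_member(data, seed):
    """Fit one ensemble member: a least-squares line on a bootstrap
    resample of (x, y) pairs. A stand-in for one trained neural network
    in a deep ensemble."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]
    n = len(sample)
    mx = sum(x for x, _ in sample) / n
    my = sum(y for _, y in sample) / n
    sxx = sum((x - mx) ** 2 for x, _ in sample)
    sxy = sum((x - mx) * (y - my) for x, y in sample)
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return lambda x: a + b * x

def ensemble_predict(members, x):
    """Gaussian-style prediction: ensemble mean, with the variance
    across members serving as the uncertainty estimate."""
    preds = [m(x) for m in members]
    return statistics.fmean(preds), statistics.pvariance(preds)

# Toy CHF-like data: flux decreasing with quality (illustrative only).
data = [(q / 10, 3000 - 200 * (q / 10) + random.Random(q).uniform(-5, 5))
        for q in range(10)]
members = [make_member(data, seed) for seed in range(8)]
mean, var = ensemble_predict(members, 0.5)
```

Bayesian optimization of hyperparameters (learning rate, batch size) would sit one level above this loop, re-training the ensemble per candidate configuration and scoring it by RMSE.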
ID: 1264 / Tech. Session 3-7: 2
Full Paper, Track 7: Digital Technologies for Thermal Hydraulics
Keywords: Critical Heat Flux, Active Learning, Variational Inference, Bayesian Neural Networks, Digital Twins

Aided Active Learning (AAL) for Enhanced Critical Heat Flux Prediction
University of Michigan, United States of America; Idaho National Laboratory, United States of America

Accurate prediction of Critical Heat Flux (CHF) is crucial for the safe and efficient operation of nuclear reactors. Traditional CHF modeling methods often require extensive experimental data and are computationally expensive. In this work, we propose a novel approach to CHF prediction that combines active learning with Variational Inference (VI) in a Bayesian Feedforward Neural Network (BFNN) setting. By utilizing the uncertainty quantification inherent in Variational Inference, the most informative data points can be strategically chosen to incrementally train the model, minimizing both the computational cost and the data required for accurate predictions. VI is less expensive than other Bayesian inference methods, making it a feasible option for active learning with neural networks. The BFNN begins with a small subset of training data and applies the reparameterization trick to approximate the posterior distribution of the model weights. As new data are strategically selected based on uncertainty, the network updates its posterior distribution, improving accuracy while remaining computationally efficient. This active learning framework prioritizes areas of high uncertainty, reducing data requirements and speeding up the learning process. We evaluate our method on a CHF dataset, demonstrating substantial improvements in performance compared to traditional approaches. The framework is particularly suited for digital twins of nuclear reactors, where real-time updates and efficient learning from sparse data are essential. We aim to assess performance using Mean Absolute Percentage Error (MAPE) and R² on a test set, showing that our variational approach achieves comparable accuracy and prediction quality with far less data.

11:10am - 11:35am
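The uncertainty-driven acquisition loop described in the second abstract (ID 1264) can be sketched in miniature. Here a small committee of bootstrap-fitted lines plays the role of the variational posterior: committee disagreement stands in for the BFNN's predictive uncertainty, and each round labels the pool point where that disagreement is largest. Everything here (the oracle, the committee, the pool values) is an illustrative assumption.

```python
import random
import statistics

def fit_committee(labeled, n_models=5):
    """Fit a committee of least-squares lines, each on a bootstrap
    resample of the labeled set; spread across members stands in for
    the posterior predictive uncertainty of a variational BNN."""
    committee = []
    for seed in range(n_models):
        rng = random.Random(seed)
        sample = [rng.choice(labeled) for _ in labeled]
        n = len(sample)
        mx = sum(x for x, _ in sample) / n
        my = sum(y for _, y in sample) / n
        sxx = sum((x - mx) ** 2 for x, _ in sample) or 1e-12
        b = sum((x - mx) * (y - my) for x, y in sample) / sxx
        committee.append((my - b * mx, b))
    return committee

def acquire(committee, pool):
    """Pick the pool point where committee predictions disagree most."""
    def spread(x):
        return statistics.pstdev(a + b * x for a, b in committee)
    return max(pool, key=spread)

oracle = lambda x: 3000 - 200 * x           # hidden "experiment"
labeled = [(x, oracle(x)) for x in (0.0, 0.1)]
pool = [0.2, 0.5, 1.0, 2.0, 5.0]
for _ in range(3):                          # three active-learning rounds
    committee = fit_committee(labeled)
    x_new = acquire(committee, pool)
    pool.remove(x_new)
    labeled.append((x_new, oracle(x_new)))  # query the oracle
```

In the paper's setting, refitting the committee corresponds to updating the variational posterior over weights via the reparameterization trick after each acquisition.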
ID: 1623 / Tech. Session 3-7: 3
Full Paper, Track 7: Digital Technologies for Thermal Hydraulics
Keywords: Critical Heat Flux (CHF), COBRA-TF, Machine Learning, Heat Transfer

Evaluation of the Machine Learning CHF Model Enhanced COBRA-TF Prediction Performance
University of Missouri, United States of America

This study aims to enhance the prediction accuracy and expand the practical applicability of critical heat flux (CHF) calculations by integrating the thermal-hydraulic sub-channel analysis code COBRA-TF with machine learning techniques. A machine learning model was trained using the 2006 Groeneveld Lookup Tables released by the Nuclear Regulatory Commission (NRC), offering a comprehensive reference dataset for CHF prediction. Key input parameters required by the ML model include system pressure, mass flux of the working fluid, and critical quality, ensuring an accurate representation of thermal-hydraulic conditions. For COBRA-TF performance testing, 200 independent calculations were performed and assessed. The CHF values in these scenarios range from 400 to 4000 kW/m², providing a broad spectrum of conditions to validate the ML CHF model's performance. Comparative results show that, while all models demonstrated relatively good predictive performance, the machine learning-coupled COBRA-TF model significantly outperforms the standalone COBRA-TF predictions. This improvement is evidenced by a reduction in mean absolute error (MAE) from 161.64 to 117.58 (a 27% error reduction) and a decrease in root mean square error (RMSE) from 231.74 to 175.65 (a 24% error reduction). These findings highlight the ML-enhanced COBRA-TF model's advanced predictive capability, presenting it as a reliable and versatile tool with potential for broader applications across diverse thermal-hydraulic environments.

11:35am - 12:00pm
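The third abstract (ID 1623) trains an ML surrogate on lookup-table points keyed by pressure, mass flux, and quality, then queries it from within COBRA-TF. A minimal stand-in for such a surrogate is inverse-distance-weighted k-nearest-neighbor regression over a toy grid; the table values, scales, and function name below are illustrative, not Groeneveld data or the paper's model.

```python
import math

# Toy stand-in for a CHF lookup grid: (pressure MPa, mass flux kg/m^2/s,
# quality) -> CHF kW/m^2. Values are illustrative only.
TABLE = {
    (7.0, 1000.0, 0.1): 3200.0,
    (7.0, 1000.0, 0.3): 2400.0,
    (7.0, 2000.0, 0.1): 3600.0,
    (7.0, 2000.0, 0.3): 2800.0,
    (15.0, 1000.0, 0.1): 2600.0,
    (15.0, 1000.0, 0.3): 1900.0,
}

def chf_surrogate(p, g, x, k=3):
    """Inverse-distance-weighted k-NN over the table: a minimal proxy
    for an ML model trained on lookup-table points."""
    scale = (10.0, 1000.0, 0.2)   # rough scales so distances compare
    def dist(key):
        return math.sqrt(sum(((a - b) / s) ** 2
                             for a, b, s in zip((p, g, x), key, scale)))
    nearest = sorted(TABLE, key=dist)[:k]
    w = [1.0 / (dist(key) + 1e-9) for key in nearest]
    return sum(wi * TABLE[key] for wi, key in zip(w, nearest)) / sum(w)

chf = chf_surrogate(7.0, 1500.0, 0.2)   # query off-grid conditions
```

In the coupled configuration, COBRA-TF would call a function with this signature at each evaluation point instead of its built-in CHF correlation.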
ID: 1640 / Tech. Session 3-7: 4
Full Paper, Track 7: Digital Technologies for Thermal Hydraulics
Keywords: Critical Heat Flux, Machine Learning, Uncertainty Quantification, Hybrid Models

Prediction of Critical Heat Flux with Hybrid Machine Learning: Uncertainty Quantification and CTF Deployment
North Carolina State University, United States of America; University of Tennessee, Knoxville, United States of America; Oak Ridge National Laboratory, United States of America

In light water reactors, critical heat flux (CHF) is a thermal limit at which a boiling crisis occurs, marking the onset of departure from nucleate boiling (DNB) or dryout (DO). Several ML methods have been studied to predict CHF, but purely data-driven approaches struggle with interpretation, data limitations, and lack of physical context. This study builds on a hybrid approach that incorporates knowledge-based empirical correlations. Three ML techniques were evaluated for predicting correlation-measurement residuals and quantifying model uncertainties: deep neural network (DNN) ensembles, Bayesian neural networks (BNNs), and deep Gaussian processes (DGPs). These models were implemented using the public CHF dataset from the 2006 Groeneveld lookup table, focusing on DO cases. Two training sizes were considered: a nominal case (80% of the original dataset) and a throttled case (0.1%). Hybrid DNN ensembles outperformed pure ML models and the other methods, particularly in throttled cases, keeping error metrics below those of standalone correlations. They exhibited high confidence with low variability in predictions. BNNs showed similar results but with higher relative standard deviation and slightly elevated errors. Hybrid models resisted performance degradation with limited data, though their errors were higher than those of the bare correlations. DGPs had the least favorable metrics but small uncertainties in nominal cases. This methodology was then implemented in the thermal-hydraulic code CTF as a first proof of implementation. Overall, these hybrid approaches were shown to offer a high degree of accuracy with low uncertainties, in addition to having a more interpretable basis compared to purely data-driven CHF modeling approaches.

12:00pm - 12:25pm
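The hybrid scheme in the fourth abstract (ID 1640), where an empirical correlation provides the physical baseline and the ML model learns only the correlation-measurement residual, can be sketched with a toy correlation and a one-parameter least-squares fit in place of the DNN ensemble. The correlation, the synthetic "measurements", and the residual form are all illustrative assumptions.

```python
def correlation(quality):
    """Toy knowledge-based correlation: CHF falls linearly with quality.
    A placeholder, not one of the paper's correlations."""
    return 3000.0 - 1500.0 * quality

# Synthetic "measurements": the correlation plus a systematic bias
# (here exactly quadratic in quality) for the data-driven term to learn.
data = [(q / 10, correlation(q / 10) + 120.0 * (q / 10) ** 2)
        for q in range(11)]

# Fit the residual (measurement - correlation) by least squares on q^2,
# standing in for training a DNN ensemble on the residuals.
num = sum((y - correlation(q)) * q * q for q, y in data)
den = sum(q ** 4 for q, _ in data)
coef = num / den

def hybrid(quality):
    """Final prediction: correlation baseline + learned residual."""
    return correlation(quality) + coef * quality ** 2
```

The design point is that the ML term only has to capture what the physics-based correlation misses, which is why the hybrid models in the abstract degrade gracefully when training data is throttled.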
ID: 1719 / Tech. Session 3-7: 5
Full Paper, Track 7: Digital Technologies for Thermal Hydraulics
Keywords: Neural Network, Data Augmentation, Critical Heat Flux, Regression, Interpretability

A Data-Driven Approach to Critical Heat Flux: An ML-Based Method
UNIBO/ENEA, Italy; ENEA, Italy

The development of an accurate model to predict Critical Heat Flux (CHF) is essential for advancing nuclear power technology, where safety and efficiency are paramount. In this context, a Machine Learning (ML)-based model has been constructed from the latest released NEA benchmark dataset on CHF. Comprehensive analyses have been conducted on feature selection, extraction, and engineering to enhance the model's learning capacity. Additionally, a data augmentation process incorporating background noise was employed to increase robustness. Preliminary results indicate that this purely data-driven architecture, an 8-layer feedforward neural network with batch normalization and optimized dropout layers, outperforms traditional empirical models and lookup tables in regression tasks. The network leverages hidden data relationships for improved accuracy, suggesting that ML approaches could offer a more adaptable and precise tool for predicting CHF, which is valuable in optimizing reactor cooling system design and operation. Future work could explore integrating physics-informed neural networks (PINNs) to blend data-driven insights with established physical laws, potentially enhancing model reliability and interpretability. Additionally, the inclusion of pretrained models could offer a powerful baseline, enabling the framework to leverage previously learned features and patterns, which may reduce computational costs and improve generalizability. Furthermore, applying explainability techniques like SHAP or LIME could provide critical insights into feature importance, helping refine feature engineering and model interpretability.
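The background-noise data augmentation mentioned in the fifth abstract (ID 1719) amounts to expanding the training set with jittered copies of each sample while leaving the targets untouched. A minimal sketch, assuming multiplicative Gaussian jitter scaled to each feature's magnitude (the noise model, feature tuples, and function name are illustrative, not the paper's procedure):

```python
import random

def augment(data, copies=3, noise=0.01, seed=0):
    """Expand a dataset of (features, target) pairs by appending
    noise-perturbed copies of each sample. Multiplicative Gaussian
    jitter keeps the perturbation proportional to feature magnitude."""
    rng = random.Random(seed)
    out = list(data)                        # keep originals first
    for _ in range(copies):
        for features, target in data:
            jittered = tuple(f * (1.0 + rng.gauss(0.0, noise))
                             for f in features)
            out.append((jittered, target))  # target left unperturbed
    return out

# Toy CHF-style samples: (pressure, mass flux, quality) -> CHF.
base = [((7.0, 1000.0, 0.2), 2900.0), ((15.0, 1500.0, 0.4), 2100.0)]
augmented = augment(base)
```

With `copies=3`, each original sample yields three perturbed variants, quadrupling the effective training set that the feedforward network sees.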
