Conference Agenda
Tech. Session 2-9. ML for Critical Heat Flux - II
Presentations
4:00pm - 4:25pm
ID: 1273 / Tech. Session 2-9: 1
Full Paper / Track 7. Digital Technologies for Thermal Hydraulics
Keywords: PINNs, DNNs, CHF

Use of PINNs to Improve CHF Model Behaviour
KTH, Sweden

The use of standard deep neural networks (DNNs) has been shown to have better predictive capability than the look-up table (LUT) method for predicting critical heat flux (CHF) from input parameters. However, recent work has shown that such models produce an unphysical dependence of CHF on the heated length parameter when the heated length is large. We show that this undesired model behaviour results from having extremely few data points at high heated length values. One option is to remove heated length as an input parameter entirely, but this also removes the possible dependence of CHF on heated length at low heated length values. Consequently, we applied a physics-informed neural network (PINN) that penalizes the dependence of CHF on heated length, scaling the penalty so that it is proportional to the heated length value. The resulting PINN model exhibits a CHF dependence on heated length only at smaller heated length values and is practically independent of heated length at high values. The PINN model has lower accuracy on the training data than the reference DNN model, which shows that the training data strongly imply at least some dependence of CHF on heated length. We studied variants of the PINN penalty term and obtained a model whose training-data accuracy lies between the LUT method and the reference DNN model.

4:25pm - 4:50pm
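The length-weighted penalty described in the abstract of paper 1273 can be sketched numerically. The toy surrogate model, the finite-difference sensitivity, and the weighting form `lam * L` below are illustrative assumptions, not the authors' actual network or loss:

```python
import numpy as np

# Hypothetical sketch: augment the data loss with a term that penalizes the
# sensitivity of predicted CHF to heated length L, weighted proportionally
# to L itself, so L-independence is enforced mainly at large L.

def chf_model(x, L):
    """Toy surrogate: CHF depends on a generic input x and heated length L."""
    return 2.0 * x + 0.5 * np.exp(-L) + 0.1 * L  # arbitrary illustrative form

def pinn_loss(x, L, q_true, lam=1.0, eps=1e-4):
    q_pred = chf_model(x, L)
    data_loss = np.mean((q_pred - q_true) ** 2)
    # Finite-difference sensitivity of the predicted CHF to heated length.
    dq_dL = (chf_model(x, L + eps) - chf_model(x, L - eps)) / (2 * eps)
    # Penalty weight grows with L: the model is pushed toward
    # L-independence only where the heated length is large.
    physics_loss = np.mean(lam * L * dq_dL ** 2)
    return data_loss + physics_loss

x = np.array([1.0, 2.0, 3.0])
L = np.array([0.5, 5.0, 20.0])
q_true = chf_model(x, L)  # perfect data: only the penalty term remains
loss = pinn_loss(x, L, q_true)
```

With perfect data the residual loss is entirely the physics penalty, dominated by the largest-L sample, which is exactly the trade-off the abstract reports: penalty variants trade training-data accuracy against physical behaviour at high heated length.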
ID: 1490 / Tech. Session 2-9: 2
Full Paper / Track 7. Digital Technologies for Thermal Hydraulics
Keywords: Critical heat flux, Probabilistic neural network, Model-informed machine learning, Uncertainty quantification, Interpretable AI

Probabilistic/Interpretable Neural Network Frameworks for Flow Boiling CHF Prediction in Circular Tubes
1Korea Institute of Energy Technology (KENTECH), Korea, Republic of; 2Korea Atomic Energy Research Institute (KAERI), Korea, Republic of

Despite tremendous efforts to predict the critical heat flux (CHF), existing models carry considerable uncertainty owing to the challenging phenomenological nature of CHF and their limited regression features. Applying artificial intelligence techniques to CHF prediction is expected to overcome the limitations of conventional methodologies. However, deterministic neural network algorithms, whose massive weight/bias matrices consist of point values, raise intrinsic concerns about black-box behaviour, generalization, and reliability in practical applications. To resolve the concerns inherent in deterministic approaches, this study developed probabilistic neural network frameworks that quantify uncertainty and allow interpretation of predictions over a wide range of flow conditions. Three standalone probabilistic neural networks, i.e., a Bayesian neural network (BNN), Monte Carlo dropout (MCD), and a deep ensemble (DE), were constructed to demonstrate the feasibility of quantifying uncertainty in CHF prediction. In addition, a series of model-informed neural network architectures, in which the 2006 CHF look-up table provides the primary CHF prediction and neural network models minimize the residual between the actual data and that prediction, were developed to improve generalization capability. The standalone and model-informed deep ensemble frameworks exhibit the best regression and generalization performance, providing the aleatoric and epistemic uncertainties of their predictions. Furthermore, the influences of individual parameters and the relationships among them are successfully analyzed by applying an interpretable AI technique.

4:50pm - 5:15pm
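The two ideas combined in paper 1490, an ensemble whose spread estimates epistemic uncertainty and a model-informed scheme where a look-up-table baseline predicts CHF and a learned model fits only the residual, can be sketched together. The baseline function, bootstrap-resampled polynomial "members", and all data below are stand-ins, not the authors' networks or the 2006 look-up table:

```python
import numpy as np

rng = np.random.default_rng(0)

def lut_baseline(x):
    """Stand-in for the look-up-table prediction (illustrative only)."""
    return 3.0 * x

# Synthetic training data: baseline plus a smooth residual plus noise.
x_train = rng.uniform(0.0, 1.0, 50)
y_train = lut_baseline(x_train) + 0.5 * x_train**2 + rng.normal(0, 0.05, 50)
residual = y_train - lut_baseline(x_train)

# "Deep ensemble" stand-in: residual models fit on bootstrap resamples.
members = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), len(x_train))
    members.append(np.polyfit(x_train[idx], residual[idx], deg=2))

def predict(x):
    """Model-informed prediction: baseline + ensemble-mean residual.
    The member-to-member spread serves as an epistemic uncertainty estimate."""
    preds = np.array([lut_baseline(x) + np.polyval(c, x) for c in members])
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = predict(np.array([0.5]))
```

Because the learned part only corrects the baseline, predictions fall back toward the look-up table where data is sparse, which is the generalization benefit the abstract attributes to the model-informed architecture.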
ID: 1866 / Tech. Session 2-9: 3
Full Paper / Track 7. Digital Technologies for Thermal Hydraulics
Keywords: Critical Heat Flux, Machine Learning, XGBoost, Multi-Layer Perceptron, CHF Lookup Table

Critical Heat Flux Prediction in Round Tubes Using AI/ML: A Comparison of XGBoost and MLP Models
1Korea Atomic Energy Research Institute, Korea, Republic of; 2Korea Institute of Energy Technology, Korea, Republic of

Critical heat flux (CHF) is a key design parameter in water-cooled reactors, directly influencing operational safety margins and economic efficiency. However, accurately predicting CHF remains challenging due to its inherent complexity and uncertainty. This study evaluates the performance of two AI/ML models, XGBoost and a Multi-Layer Perceptron (MLP), using the NRC CHF database containing approximately 25,000 data points under uniform heating conditions in round tubes. A robust database-splitting methodology was employed to create interpolation and extrapolation datasets for assessing model generalization. Results demonstrated that the MLP outperformed XGBoost in interpolation and single-variable extrapolation scenarios. Notably, the MLP achieved prediction accuracies comparable to the LUT HBM even without explicit training on these data ranges, with improved extrapolation performance driven by feature engineering that transformed the output variable to log(δX). However, the MLP exhibited limitations in multi-variable extrapolation regions, with errors approximately three times higher than the LUT HBM. In conclusion, this research demonstrates that AI/ML models, particularly MLPs with optimized input-output features, can serve as robust alternatives to traditional LUT methods for CHF prediction in round-tube geometries. Future work will address multi-variable extrapolation challenges and extend these methodologies to more complex geometries such as rod bundles for broader applicability in reactor safety analysis.

5:15pm - 5:40pm
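The interpolation-versus-extrapolation split underpinning the comparison in paper 1866 can be sketched as follows. The choice of variable (pressure), its range, and the 90th-percentile cut are illustrative assumptions; the paper's actual splitting methodology is not specified here:

```python
import numpy as np

# Hypothetical single-variable extrapolation split: hold out the points
# beyond the training range of one feature, so a model fitted on the
# remaining points is scored inside (interpolation) and outside
# (extrapolation) the range it saw during training.
rng = np.random.default_rng(1)
pressure = rng.uniform(0.1, 20.0, 1000)   # MPa, illustrative values only

cut = np.quantile(pressure, 0.9)          # top 10% held out for extrapolation
extrap_mask = pressure > cut
interp_mask = ~extrap_mask
```

Scoring a model separately on the two masks is what allows statements like "comparable accuracy in interpolation, three times the error in multi-variable extrapolation"; a multi-variable split would intersect such masks across several features at once.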
ID: 1178 / Tech. Session 2-9: 4
Full Paper / Track 7. Digital Technologies for Thermal Hydraulics
Keywords: Deep generative models, Diffusion models, Critical heat flux, Data augmentation

Evaluating the Performance of Diffusion Models for Scientific Data Augmentation - a Case Study with Critical Heat Flux
North Carolina State University, United States of America

Deep generative models (DGMs) are powerful deep learning models for generating synthetic but realistic data by learning the underlying distribution of a training dataset. DGMs offer a potential solution to the challenges of data scarcity and data imbalance, which are very common in nuclear engineering, as measurement data is often obtained from costly experiments. Diffusion models (DMs), a relatively new family of DGMs, have demonstrated great potential in data augmentation, especially for images and videos. In this work, we explored the effectiveness of DMs in generating scientific data for nuclear engineering applications. Our focus is on evaluating the performance of DMs in generating critical heat flux (CHF) data, using a training dataset that was originally used to develop the 2006 Groeneveld lookup table. The DM is assessed on its ability to capture the correlations between different parameters in the dataset and whether it generates physically meaningful values for each parameter. Additionally, we compared the full joint empirical cumulative distribution functions (ECDFs) of the real and synthetic datasets to evaluate the overall distributional similarity. The results show that DMs successfully generate CHF data by accurately learning the correlations between parameters without producing unphysical samples. The ECDF comparison further confirms that the synthetic data closely matches the measurement data, demonstrating the potential of DMs for data augmentation in nuclear engineering.
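The ECDF-based similarity check mentioned in paper 1178 can be sketched for a single variable with a Kolmogorov-Smirnov-style statistic (the maximum gap between two empirical CDFs). The Gaussian samples below stand in for measured and generated CHF values; the paper compares full joint ECDFs, which this one-dimensional sketch only illustrates:

```python
import numpy as np

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, 2000)       # stand-in for measured CHF values
synthetic = rng.normal(0.0, 1.0, 2000)  # stand-in for DM-generated values

def ecdf_gap(a, b):
    """Maximum vertical distance between the ECDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(Fa - Fb))

gap = ecdf_gap(real, synthetic)  # small gap -> distributions closely match
```

A gap near zero supports the claim that the synthetic data closely matches the measurement data; for same-distribution samples of this size the statistic is typically well below 0.05.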
