Variable importance determination refers to the challenge of identifying the most relevant input dimensions or features for a given learning task and quantifying their relevance, either with respect to a local decision or a global model. Feature relevance determination constitutes a foundation for feature selection, and it enables intuitive insight into the rationale behind model decisions. Indeed, it constitutes one of the oldest and most prominent explanation technologies for machine learning models, relevant for both deep and shallow networks. A large number of measures have been proposed, such as mutual information, permutation feature importance, DeepLIFT, LIME, GMLVQ, or Shapley values, to name just a few.
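As a concrete illustration of one of the measures listed above, the following is a minimal sketch of (batch) permutation feature importance: the importance of a feature is the increase in prediction error when its column is randomly shuffled. The synthetic data, the linear model, and all names here are illustrative assumptions, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A simple least-squares linear model as the fitted predictor.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X_: X_ @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(X, y, predict, n_repeats=10, seed=1):
    """Importance of feature j = mean increase in MSE after shuffling column j."""
    rng = np.random.default_rng(seed)
    base = mse(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += mse(y, predict(Xp)) - base
        importances[j] /= n_repeats
    return importances

imp = permutation_importance(X, y, predict)
print(imp)  # feature 0 dominates, feature 1 is small, feature 2 is near zero
```

Note that this batch formulation requires repeated passes over a fixed dataset, which is exactly the assumption that breaks down in the streaming setting discussed below.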
Within the talk, I will address recent extensions of feature relevance determination that arise as machine learning models are increasingly used in everyday life. Here, models face an open environment, possibly changing dynamics, and the necessity of model adaptation to account for changes of the underlying distribution. At present, feature relevance determination focuses almost exclusively on static scenarios and batch training. In the talk, I will target the question of how to efficiently and effectively accompany an incrementally learning model with feature relevance determination methods (1,2). As a second challenge, features are often not mutually independent, and the relevance of groups rather than single features should be judged. While mathematical models such as Shapley values take feature correlations into account for individual additive feature relevance terms, it is unclear how to efficiently and effectively extend these to groups of features. In the talk, I will discuss novel methods for the efficient computation of feature interaction indices (3,4).
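To make the notion of an interaction index concrete, the pairwise Shapley interaction index of players i and j can be computed exactly for small toy games by enumerating all coalitions of the remaining players. The sketch below uses a hypothetical three-player game and is purely illustrative; it is not the efficient approximation scheme of the talk, since exact enumeration scales exponentially in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_interaction(v, n, i, j):
    """Exact pairwise Shapley interaction index of players i, j.

    v: set function mapping a frozenset of players to a real value.
    Enumerates all coalitions of the other players (exponential cost,
    feasible only for toy-sized games).
    """
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for size in range(len(others) + 1):
        # Shapley weight for coalitions of this size.
        weight = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
        for S in combinations(others, size):
            S = frozenset(S)
            # Discrete second derivative: joint gain minus individual gains.
            delta = v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
            total += weight * delta
    return total

# Toy game: each player contributes 1; players 0 and 1 have a
# synergy bonus of 2 when both are present.
def v(S):
    return len(S) + (2 if {0, 1} <= S else 0)

print(shapley_interaction(v, 3, 0, 1))  # 2.0 -- recovers the synergy
print(shapley_interaction(v, 3, 0, 2))  # 0.0 -- no interaction
```

A positive index signals synergy between the two features, a negative one redundancy, and zero indicates additive (independent) contributions.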
(1) F. Fumagalli, M. Muschalik, E. Hüllermeier, B. Hammer: Incremental Permutation Feature Importance (iPFI): Towards Online Explanations on Data Streams. Mach. Learn. 112(12): 4863-4903, 2023.
(2) M. Muschalik, F. Fumagalli, B. Hammer, E. Hüllermeier: iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams. ECML/PKDD (3) 2023: 428-445.
(3) F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier, B. Hammer: SHAP-IQ: Unified Approximation of Any-Order Shapley Interactions. NeurIPS 2023.
(4) M. Muschalik, F. Fumagalli, B. Hammer, E. Hüllermeier: Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles. AAAI 2024.