Conference Agenda: WSOM+ 2024

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

Session Overview
Session: Prototype-Based Supervised & Unsupervised Learning
Time: Wednesday, 10 July 2024, 10:15am - 11:55am
Session Chair: Alexander R.T. Gepperth
Location: 39-001 (ZMS, Bahnhofstr. 15, 09648 Mittweida)

Presentations

New Cloth unto an Old Garment: SOM for Regeneration Learning

Rewbenio A. Frota [1,3], Guilherme A. Barreto [2], Marley M.B.R. Vellasco [3], Candida Menezes de Jesus [1]

[1] PETROBRAS, Brazil; [2] Graduate Program in Teleinformatics Engineering, Center of Technology, Federal University of Ceará (UFC), Fortaleza - CE, Brazil; [3] Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro - RJ, Brazil

A recent paradigm called Regeneration Learning addresses generative problems where the target data (e.g., images) is more complex than the available input source. While current cross-modal representation and regeneration learning rely on supervised deep learning models, this paper aims to revisit the adequacy of unsupervised models in this field. In this regard, we propose a new unsupervised approach that utilizes the SOM as a heteroassociative memory model to learn cross-modal representations in a topologically coherent map. This approach enables bidirectional predictive/regenerative mapping between domains. We evaluate the potential of this method on an unsolved (so far!) practical problem in petroleum geoscience.
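The heteroassociative use of a SOM described above can be illustrated with a short sketch: train the map on concatenated source/target vectors, then recall the missing modality by matching only on the source part of each codebook vector. The grid size, learning schedule, and function names below are illustrative Python assumptions, not the authors' implementation.

    import numpy as np

    def train_hetero_som(X, Y, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
        """Train a SOM on concatenated (source, target) vectors; X is n x dx, Y is n x dy."""
        rng = np.random.default_rng(seed)
        n, dx = X.shape
        dy = Y.shape[1]
        Z = np.hstack([X, Y])                              # joint cross-modal training vectors
        W = rng.normal(size=(grid[0] * grid[1], dx + dy))  # codebook, one row per map unit
        coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float)
        T, t = epochs * n, 0
        for _ in range(epochs):
            for idx in rng.permutation(n):
                lr = lr0 * (1.0 - t / T)                   # linearly decaying learning rate
                sigma = sigma0 * (1.0 - t / T) + 1e-3      # shrinking neighborhood radius
                z = Z[idx]
                bmu = np.argmin(np.linalg.norm(W - z, axis=1))
                h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
                W += lr * h[:, None] * (z - W)             # neighborhood-weighted codebook update
                t += 1
        return W, dx

    def regenerate(W, dx, x):
        """Recall the target modality: match on the source columns only, read off the rest."""
        bmu = np.argmin(np.linalg.norm(W[:, :dx] - x, axis=1))
        return W[bmu, dx:]

Matching on the target columns instead of the source columns gives the reverse mapping, which is the bidirectional predictive/regenerative property the abstract refers to.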



Unsupervised Learning-based Data Collection Planning with Dubins Vehicle and Constrained Data Retrieving Time

Jindřiška Deckerová, Jan Faigl

Czech Technical University in Prague, Czech Republic

In remote data collection from sampling stations, a vehicle must stay within a sufficient distance of a station for a predefined minimal time to retrieve all the required data from the site. The planning task is to find a cost-efficient data collection trajectory that allows the data collection vehicle to retrieve data from all sensing sites. For a fixed-wing aerial vehicle flying with a constant forward velocity, the problem is to determine the shortest feasible path that visits every sensing site such that the vehicle stays within a reliable communication distance of each station for a sufficient period. We propose to formulate the planning problem as a variant of the Close Enough Dubins Traveling Salesman Problem with Time Constraints (CEDTSP-TC) that is heuristically solved by unsupervised learning of the Growing Self-Organizing Array (GSOA), modified to address the minimal time the vehicle is required to spend within the communication range of each station. The proposed method is compared with a baseline based on a sampling-based decoupled approach. The presented results support the feasibility of both proposed solvers on random instances and show that the GSOA-based approach outperforms the decoupled approach or provides similar results.
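As a rough illustration of the unsupervised principle behind GSOA-style solvers (the actual algorithm grows and prunes nodes, plans Dubins maneuvers for the fixed-wing vehicle, and enforces the required dwell time, none of which is modeled here), the Python sketch below adapts a closed ring of nodes toward the "close enough" communication disks of the stations; all names and parameters are assumptions.

    import numpy as np

    def closest_point_on_disk(center, radius, p):
        """Closest point of a communication disk to p (p itself if it is already inside)."""
        v = p - center
        d = np.linalg.norm(v)
        return p.copy() if d <= radius else center + v * (radius / d)

    def ring_tour(stations, radius, n_nodes=80, epochs=120, lr0=0.6, sigma0=10.0, seed=0):
        """Adapt a closed ring of nodes toward the disks of all 2-D station positions."""
        rng = np.random.default_rng(seed)
        c = stations.mean(axis=0)
        r0 = stations.std() + 1e-6
        ang = np.linspace(0.0, 2.0 * np.pi, n_nodes, endpoint=False)
        ring = c + r0 * np.c_[np.cos(ang), np.sin(ang)]        # initial circular ring of nodes
        for e in range(epochs):
            lr = lr0 * (1.0 - e / epochs)
            sigma = max(sigma0 * (1.0 - e / epochs), 0.5)
            for s in rng.permutation(len(stations)):
                # winner = ring node closest to this station's "close enough" disk
                goals = np.array([closest_point_on_disk(stations[s], radius, q) for q in ring])
                win = np.argmin(np.linalg.norm(goals - ring, axis=1))
                idx = np.arange(n_nodes)
                ring_dist = np.minimum(np.abs(idx - win), n_nodes - np.abs(idx - win))
                h = np.exp(-(ring_dist ** 2) / (2.0 * sigma ** 2))
                ring += lr * h[:, None] * (goals[win] - ring)  # pull winner and ring neighbors
        return ring  # the order of the stations' winners along the ring induces a tour

The visiting order extracted from the adapted ring would still have to be turned into a feasible Dubins trajectory that satisfies the dwell-time constraint, which is the part addressed by the proposed CEDTSP-TC formulation.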



Hyperbox Learning Vector Quantization Based on Min-Max-Neurons

Thomas Villmann, Thomas Davies, Alexander Engelsberger

University of Applied Sciences Mittweida, Germany

In this paper, we propose the application of min-max neurons in generalized learning vector quantization (GLVQ) models, which correspond to min-max prototypes. These prototypes can be identified with hyperboxes in the data space. Keeping the general GLVQ cost function, we redefine the Hebbian responsibilities for min-max prototypes and derive consistent learning rules for stochastic gradient descent learning. We demonstrate that the resulting hyperbox-based GLVQ is capable of solving several classification tasks in a robust manner, which can be attributed to the use of robust min-max prototypes. Finally, we give suggestions for future research on GLVQ based on min-max prototypes.
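A minimal Python sketch of the two ingredients the abstract names: a hyperbox (min-max prototype) distance that is zero inside the box, and a GLVQ-style step built from the closest correct and closest incorrect prototypes. The attract/repel update shown here is a simplified heuristic stand-in, not the stochastic gradient rules derived in the paper; all names and the learning rate are assumptions.

    import numpy as np

    def box_distance(x, lower, upper):
        """Distance of x to the hyperbox [lower, upper]; zero if x lies inside the box."""
        excess = np.maximum(lower - x, 0.0) + np.maximum(x - upper, 0.0)
        return np.linalg.norm(excess)

    def glvq_step(x, y, lowers, uppers, labels, lr=0.05):
        """One stochastic step; lowers/uppers are (n_prototypes, dim) arrays, updated in place."""
        d = np.array([box_distance(x, l, u) for l, u in zip(lowers, uppers)])
        plus = np.argmin(np.where(labels == y, d, np.inf))    # closest box of the correct class
        minus = np.argmin(np.where(labels != y, d, np.inf))   # closest box of a wrong class
        dp, dm = d[plus], d[minus]
        mu = (dp - dm) / (dp + dm + 1e-12)                     # GLVQ classifier function
        # Attract the correct box (move its nearest faces toward x),
        # repel the incorrect one (move its nearest faces away from x).
        for k, sign in ((plus, +1.0), (minus, -1.0)):
            below = x < lowers[k]
            above = x > uppers[k]
            lowers[k][below] += sign * lr * (x[below] - lowers[k][below])
            uppers[k][above] += sign * lr * (x[above] - uppers[k][above])
        return mu

Here mu is the usual GLVQ relative-distance measure; negative values indicate that the closest correct hyperbox is nearer to the sample than the closest incorrect one.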



Sparse Clustering with K-means - Which Penalties and for Which Data?

Marie Chavent [1], Marie Cottrell [2], Alex Mourer [2], Madalina Olteanu [3]

[1] Institut de Mathématiques de Bordeaux, France; [2] Université Paris 1 Panthéon Sorbonne, France; [3] Université Paris Dauphine PSL, France

While high dimensionality and the selection of meaningful features are usually a burden in machine learning, this is even more so in unsupervised learning and particularly in clustering. The presence of uninformative, sometimes correlated, features may significantly bias the results of distance-based methods such as k-means. Since the seminal work of Witten et al. (2010), different versions of sparse k-means have been introduced, building on the idea of adding penalty terms to the loss function and resulting in automatic feature selection and/or weighting. This paper investigates the connections between some of these methods, and particularly the differences induced by the choice of penalty terms. It also focuses on the case of mixed data and how such data may be handled by the sparse k-means approaches. Finally, it presents the algorithms and model selection tools made available through a recently implemented R package, vimpclust.
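The vimpclust package mentioned above is an R package; the Python sketch below only illustrates the generic sparse k-means idea behind Witten-style penalties: alternate a k-means step in a weighted feature space with a soft-thresholded update of the feature weights. The threshold rule and all parameters are simplifying assumptions, not the package's model-selection procedure.

    import numpy as np

    def soft_threshold(a, delta):
        return np.sign(a) * np.maximum(np.abs(a) - delta, 0.0)

    def sparse_kmeans(X, k, delta=0.5, n_iter=20, seed=0):
        """Alternate weighted k-means assignments and soft-thresholded feature-weight updates."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        w = np.full(p, 1.0 / np.sqrt(p))                   # feature weights with unit L2 norm
        labels = rng.integers(k, size=n)
        for _ in range(n_iter):
            # (1) k-means step in the weighted feature space
            Xw = X * np.sqrt(w)
            centers = np.array([Xw[labels == c].mean(axis=0) if np.any(labels == c)
                                else Xw[rng.integers(n)] for c in range(k)])
            labels = np.argmin(((Xw[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            # (2) per-feature between-cluster dispersion, the quantity the weights reward
            total = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
            within = np.zeros(p)
            for c in range(k):
                Xc = X[labels == c]
                if len(Xc):
                    within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
            between = total - within
            # (3) lasso-type weight update: soft-threshold, then renormalize to unit L2 norm
            w = soft_threshold(between, delta * between.max())
            w /= np.linalg.norm(w) + 1e-12
        return labels, w

Features whose between-cluster dispersion falls below the threshold receive a zero weight, which is the automatic feature selection effect discussed in the abstract; the penalties compared in the paper differ in how this thresholding is defined and tuned.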


