Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
Plenary 3 (Part 1): Advances in deep learning and applications to point cloud analysis
Time:
Friday, 08/Sept/2023:
9:15am - 10:15am

Session Chair: Dr Emily Lines, University of Cambridge
Session Chair: Dr Stefano Puliti, NIBIO
Location: Jeffery Hall, IoE


Meeting ID: 915 0370 0141 Passcode: 862383

Session Abstract

Keynote by Stefano Puliti


Presentations

TreeD_species: benchmarking sensor-agnostic tree species classification using proximal laser scanning (TLS, MLS, ULS) and CNNs

Stefano Puliti

NIBIO, Norway

The TreeD_species benchmark dataset is an exciting new resource for researchers and practitioners interested in sensor-agnostic tree species classification using proximal laser scanning methods. This dataset was compiled through a collaborative effort that sourced data from open databases and contributions from scientists. The data consists of both aerial (drone) and terrestrial laser scanning data that have been segmented into individual trees and labeled with their respective species information.

The dataset contains approximately 20,000 trees from 30 different species, making it an ideal resource for testing and evaluating machine learning algorithms for point cloud classification tasks. As part of the COST Action 3DForEcoTech, the dataset was used to benchmark different deep learning architectures, including image-based methods like YOLOv5 and SimpleView, as well as point cloud methods like MLP-Mixer, PointNet++, and PointAugment + DGCNN.

Initial results show that image-based methods outperformed point cloud-based methods, with F1 scores of 0.78 and 0.76 for YOLOv5 and SimpleView, respectively, compared with 0.70 for MLP-Mixer and PointNet++ and 0.67 for PointAugment + DGCNN. These results highlight the potential of image-based methods for tree species classification and demonstrate the value of the TreeD_species dataset for benchmarking machine learning algorithms.
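The F1 scores reported above are the harmonic mean of precision and recall; a minimal sketch of the computation (the numbers below are illustrative, not taken from the benchmark):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative example: precision 0.80 and recall 0.76
print(round(f1_score(0.80, 0.76), 2))  # → 0.78
```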

The TreeD_species dataset will be publicly released in the future, providing a valuable resource for researchers and practitioners developing and testing new methods for tree species classification using proximal laser scanning. Overall, this work shows the benefits of such a collaborative effort to compile a one-of-a-kind tree database for method development and benchmarking.



Efficient 3D Forest Point Cloud Data Processing: Self-Supervised Learning Strategies to Diminish Manual Labeling

Matthew J. Allen1, Adam Noach1, Harry J. F. Owen1, Stuart W. D. Grieve2, Emily R. Lines1

1Department of Geography, University of Cambridge; 2School of Geography, Queen Mary University of London

Recent advances in ground- and air-based laser scanning have enabled the capture of vast quantities of three-dimensional forest data at very high resolution, but the large-scale use of these data is limited by reliance on intensive manual processing. Deep learning-based methods show great promise for automating the processing of such data, but are traditionally limited by a similarly burdensome labelling process.

Developments in self-supervised learning may offer a solution. By pretraining models on a pretext task, for which labels can be constructed entirely from the source data without the need for ground-truth information, strong parameter initialisations can be obtained for a huge range of downstream tasks, reducing the number of hand labels required to achieve the same performance as in the fully supervised case. Here, we explore the use of masked autoencoding - a pretext task based on reconstructing masked elements of the original data - for pretraining neural networks used to process forest point cloud data. We demonstrate its performance when fine-tuned on two common downstream tasks - tree species classification and leaf-wood segmentation - and examine the degree to which this scheme can reduce the need for practitioners to perform manual labelling.
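The masked-autoencoding pretext task described above can be sketched in outline. The toy point cloud and masking ratio below are illustrative assumptions; in a real pipeline, the `visible` subset would be fed to an encoder-decoder network whose predictions are compared against the `masked` subset, rather than comparing points directly:

```python
import random

def mask_points(points, mask_ratio=0.6, seed=0):
    """Split a point cloud into visible and masked subsets.

    The pretext task: a network sees only `visible` and must
    reconstruct `masked` - the labels come from the data itself.
    """
    rng = random.Random(seed)
    idx = list(range(len(points)))
    rng.shuffle(idx)
    n_masked = int(len(points) * mask_ratio)
    masked = [points[i] for i in idx[:n_masked]]
    visible = [points[i] for i in idx[n_masked:]]
    return visible, masked

def reconstruction_loss(predicted, target):
    """Mean squared error between predicted and true masked points."""
    assert len(predicted) == len(target)
    total = 0.0
    for p, t in zip(predicted, target):
        total += sum((pc - tc) ** 2 for pc, tc in zip(p, t))
    return total / len(target)

# Toy cloud of 10 points; a real model would predict the masked set
cloud = [(float(i), float(i) * 0.5, 0.0) for i in range(10)]
visible, masked = mask_points(cloud, mask_ratio=0.6)
print(len(visible), len(masked))  # → 4 6
```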



The identification and segmentation of knot clusters and sawlogs in standing timber using terrestrial laser scanning and deep learning

Mika Pehkonen1,2,3, Jiri Pyörälä1, Markus Holopainen1, Juha Hyyppä2, Mikko Vastaranta3

1University of Helsinki; 2Finnish Geospatial Research Institute; 3University of Eastern Finland

More in-depth information about the properties and quality of wood in standing timber would enhance forest management and wood procurement. The number, size, and morphology of branches are important factors in wood quality, as knots affect the strength and appearance of sawn timber. Different sawlog types are bucked below the crown, as well as within the dead and live crown, with implications for their knottiness. Automatic, fast, and robust detection of branch whorls and segmentation of sawlog types from laser-scanned data are needed to enable large-scale inventories of tree quality, e.g., in industrial wood procurement.

In this study we assessed how a two-dimensional (2-D) deep learning object detection algorithm performed in knot cluster detection and the segmentation of different sawlog types from static terrestrial laser scanning (TLS) data. Our material consisted of 476 Norway spruces (Picea abies (L.) H. Karst.) from Southern Finland. The trees were laser-scanned, felled, and bucked into logs. The logs were X-rayed in a commercial sawmill to acquire reference data on log dimensions and knottiness.

Using the object detection algorithm YOLOv5 (You Only Look Once), we trained models to detect the branch whorls and different types of sawlogs (butt, middle, and top log) from orthographic images taken from the TLS point clouds at multiple viewing angles. We compared the predictions of the models to the X-rayed reference measurements.

Based on the initial results, the 2-D object detection method reliably estimated the number of visible whorls on the stem (precision 0.72, recall 0.745). Detection of the sawlogs was at a moderate level (precision 0.40, recall 0.728) with a strict IoU (Intersection over Union) threshold of 0.8. Based on the segmentation, sawlog volumes and bucking heights were estimated with relative root-mean-square errors of 27% and 35%, respectively.
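The IoU threshold used to judge detections above can be illustrated for axis-aligned 2-D boxes; the coordinates below are illustrative, not from the study:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two unit boxes offset by 0.5 in x: intersection 0.5, union 1.5
print(round(iou((0, 0, 1, 1), (0.5, 0, 1.5, 1)), 3))  # → 0.333
```

A detection only counts as correct when its IoU with the reference box meets the threshold, so a strict threshold of 0.8 demands near-exact localisation.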



 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: SilviLaser 2023
Conference Software: ConfTool Pro 2.6.149
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany