Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference (UTC).

Session Overview
Tutorials - Track 3
Wednesday, 07/July/2021:
7:00am - 11:59pm

7:00am - 10:00am
ID: 311 / 3-Tut: 1
Topics: Algorithms, Data mining / Machine learning / Deep Learning and AI
Keywords: Interpretable Machine Learning, Explainable Artificial Intelligence, Machine Learning, Fairness, Responsible Machine Learning

Introduction to Responsible Machine Learning

Anna Kozak, Hubert Baniecki, Przemyslaw Biecek, Jakub Wisniewski

- Language: English

- Duration: 180 minutes

- N° Participants: 150

- Level: Beginner

What? The workshop focuses on responsible machine learning, including areas such as model fairness, explainability, and validation.

Why? To gain theoretical knowledge and hands-on experience in developing safe and effective predictive models.

For whom? For those with a basic knowledge of R who are familiar with supervised machine learning and are interested in model validation.

What will be used? We will use the DALEX package for explanations, fairmodels for checking bias, and modelStudio for interactive model analysis.

Where? 100% online

When? Wednesday, 7th of July, 7:00 - 10:00 am (UTC)
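The workflow the workshop describes can be sketched in a few lines. This is a minimal illustration with simulated data, not the workshop's own materials: {DALEX} wraps a fitted model in an explainer, and {fairmodels} audits it against a protected attribute. All variable names below are illustrative.

```r
# Minimal sketch (simulated data): explain a model with {DALEX},
# then audit it with {fairmodels}. Not the workshop's materials.
library(DALEX)
library(fairmodels)

set.seed(1)
n  <- 500
df <- data.frame(
  y   = rbinom(n, 1, 0.5),
  x1  = rnorm(n),
  x2  = rnorm(n),
  sex = factor(sample(c("female", "male"), n, replace = TRUE))
)

model <- glm(y ~ x1 + x2 + sex, data = df, family = binomial())

# Wrap the fitted model in a DALEX explainer
explainer <- explain(model,
                     data    = df[, c("x1", "x2", "sex")],
                     y       = df$y,
                     label   = "logistic regression",
                     verbose = FALSE)

# Global, permutation-based variable importance
mp <- model_parts(explainer)
print(mp)

# Fairness check with respect to the protected attribute 'sex'
fc <- fairness_check(explainer,
                     protected  = df$sex,
                     privileged = "male",
                     verbose    = FALSE)
print(fc)
```

`plot(mp)` and `plot(fc)` produce the corresponding explanation and fairness plots; modelStudio builds an interactive dashboard from the same explainer object.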

10:00am - 10:15am
ID: 332 / 3-Tut: 2


useR! 2021

10:15am - 2:15pm
ID: 312 / 3-Tut: 3
Topics: Spatial analysis, Data visualisation

Entry level R maps from African data - French - English

Andy South, Anelda van der Walt, Ahmadou Dicko, Shelmith Kariuki, Laurie Baker

- Language: French - English

- Duration: 240 minutes

- N° Participants: 60

- Level: Beginner

This tutorial will provide an introduction to mapping and spatial data in R using African data. By the end of the tutorial, you should be able to make a map that is useful to you from data that you have brought yourself. We will focus on developing confidence in doing the basics really well rather than straying into more advanced analyses. Our tutorials focus on flexible workflows that you can take away, and you will also learn how to spot and avoid common pitfalls. The training will be partly based around a set of interactive learnr tutorials that we have created as part of the afrilearnr package, with accompanying online demos described in a blog post.

The tutorial will be available on shinyapps for those who are unable to install the packages locally. There will be separate English and French language groups with dedicated materials. Each group will start together for the first few sessions and then break into sub-groups of up to 10 learners, each with one trainer, for improved feedback and discussion. Towards the end of the tutorial we will challenge you to make a map using data that you have brought or found.

Each language group will come back together for a final wrapup session.
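An entry-level map of the kind described above can be sketched with {sf} and {ggplot2}. This is an assumption about the tutorial's approach, not its actual exercises; the `africountries` dataset comes from the afrilearnr package named in the abstract, and the `pop_est` column name is assumed from the Natural Earth attributes the package is derived from.

```r
# Minimal sketch (not the tutorial's own exercises): a choropleth of
# African countries with {sf} and {ggplot2}.
library(sf)
library(ggplot2)
# afrilearnr is on GitHub: remotes::install_github("afrimapr/afrilearnr")
library(afrilearnr)

# africountries is an sf data frame of African country polygons
p <- ggplot(africountries) +
  geom_sf(aes(fill = pop_est)) +                  # fill by population estimate
  scale_fill_viridis_c(name = "Population (est.)") +
  labs(title = "African countries by estimated population") +
  theme_minimal()
print(p)
```

Swapping `africountries` for an sf object read from your own file with `sf::st_read()` is the step the "bring your own data" challenge builds towards.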

2:15pm - 2:30pm
ID: 333 / 3-Tut: 4


useR! 2021

2:30pm - 5:30pm
ID: 313 / 3-Tut: 5
Topics: Community and Outreach, Reproducibility, Other

How to build a package with "Rmd First" method

Sébastien Rochette, Emily Riederer


- Language: English

- Duration: 180 minutes

- N° Participants: 30

- Level: Intermediate

The "Rmd First" method can reduce the mental load of building packages by keeping users in a familiar environment, using a tool they know: an R Markdown document. The step between writing your own R code to analyze some data and refactoring it into a well-documented, ready-to-share R package seems unreachable to many R users. The package structure is sometimes perceived as useful only for building general-purpose tools for data analysis to be shared on official platforms. However, packages can serve a broader range of purposes, from internal use to open-source sharing. Because packages are designed for robustness and enforce helpful standards for documentation and testing, the package structure provides a useful framework for refactoring analyses and preparing them to go into production. The following approach, writing a development or an analysis inside an Rmd, significantly reduces the work needed to transform an Rmd into a package:

- _Design_: define the goal of your next steps and the tools needed to reach it
- _Prototype_: use some small examples to prototype your script in an Rmd
- _Build_: turn your script into functions and document your work so the functions can later be used on real-life datasets
- _Strengthen_: create tests to ensure the stability of your code and to track modifications over time
- _Deploy_: transform the result into a well-structured package to deploy and share with your community

During this tutorial, we will work through the steps of Rmd-driven development to persuade attendees that their experience writing R code means they already know how to build a package; they only need a safe environment to find that out, which is what we propose. We will take advantage of existing tools such as {devtools}, {testthat}, {attachment} and {usethis} that ease package development from an Rmd all the way to a built package. The recent package {fusen}, which "inflates a package from a simple flat Rmd", will be presented to further reduce the step between a well-designed Rmd and package deployment. Attendees will leave this workshop having built their first package with the "Rmd First" method, and with the skills and tools to build more packages on their own.
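The inflate step can be sketched as follows. This is an assumption about typical {fusen} usage, not the tutorial's materials; paths are temporary placeholders, and argument defaults may differ across {fusen} versions.

```r
# Minimal sketch: start from a single flat Rmd and "inflate" it into
# a package skeleton with {fusen}. Not the tutorial's materials.
library(fusen)

pkg_path <- tempfile("rmdfirst.demo")

# Create a new package directory containing a pre-filled flat Rmd template
flat_file <- add_flat_template(template = "teaching",
                               pkg = pkg_path, open = FALSE)

# Inflate the flat Rmd: its function, example, and test chunks become
# R/, tests/, and vignettes/ files in a standard package structure
inflate(pkg = pkg_path, flat_file = flat_file,
        vignette_name = "Get started",
        check = FALSE, open_vignette = FALSE)
```

In day-to-day use, you keep editing the flat Rmd (design, prototype, build) and re-run `inflate()` whenever you want the package structure refreshed.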

5:30pm - 5:45pm
ID: 334 / 3-Tut: 6


useR! 2021

5:45pm - 8:30pm
ID: 314 / 3-Tut: 7
Topics: Bayesian models, Statistical models

Bayesian modeling in R with {rstanarm} - Spanish

Fernando Antonio Zepeda Herrera

- Language: Spanish

- Duration: 165 minutes

- N° Participants: 30

- Level: Intermediate

This tutorial introduces Bayesian modeling in R, particularly through {rstanarm}. We will alternate between lectures and practical examples (with {learnr} tutorials). Starting with a brief introduction to the Bayesian paradigm, we will cover linear and generalized linear regression as well as useful diagnostics and posterior visualization.
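The workflow covered can be sketched in a few lines. This is an illustration with the built-in mtcars data, not the tutorial's materials: fit a Bayesian linear model with {rstanarm}, summarise the posterior, and run a graphical posterior predictive check.

```r
# Minimal sketch of an {rstanarm} workflow: Bayesian linear regression,
# posterior summaries, and a posterior predictive check.
library(rstanarm)

fit <- stan_glm(mpg ~ wt + hp, data = mtcars,
                family = gaussian(),
                chains = 2, iter = 1000, seed = 123, refresh = 0)

print(summary(fit))
print(posterior_interval(fit, prob = 0.9))  # 90% credible intervals

pp_check(fit)  # graphical posterior predictive check (via bayesplot)
```

Generalized linear models follow the same pattern by changing `family` (e.g. `binomial()` for logistic regression), which is what makes {rstanarm} a gentle bridge from `glm()` to full Bayesian inference.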

8:30pm - 8:45pm
ID: 335 / 3-Tut: 8


useR! 2021

8:45pm - 11:45pm
ID: 315 / 3-Tut: 9
Topics: Big / High dimensional data, R in production

Introduction to TileDB for R

Dirk Eddelbuettel, Aaron Wolen

- Language: English

- Duration: 180 minutes

- N° Participants: 200

- Level: Intermediate

TileDB is an open-source universal data engine that natively supports dense and sparse multidimensional arrays as well as data frames. Large datasets can be stored on multiple backends, ranging from a local filesystem to cloud storage providers such as Amazon S3 (as well as Google Cloud Storage and Azure Blob Storage), and accessed from almost any language, including Python and R. The tutorial introduces the 'tiledb' R package on CRAN, which allows users to efficiently operate on large dense/sparse arrays using familiar R techniques and data structures. It also offers key features of the underlying TileDB Embedded library: parallelised read and write operations, multiple compression formats, time traveling (i.e., the ability to recover data stored at previous time points), flexible encryption, and Apache Arrow support. Several simple usage examples will be provided, and you will have an opportunity to follow along on your laptop. One or two fuller usage examples from bioinformatics will serve as a more extended case study.

We will illustrate how TileDB can be used to create a performant data store for results produced by genome-wide association studies, and demonstrate the Bioconductor package TileDBArray, which is built on top of the DelayedArray framework and has shown excellent performance relative to existing (HDF5-based) solutions. Finally, the use of TileDB with cloud storage providers will be illustrated. This covers both direct reads from and writes to, for example, Amazon S3, as well as a brief illustration of the 'pay-as-you-go' Software-as-a-Service offering of TileDB Cloud with its additional features.
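A basic use of the 'tiledb' R package can be sketched as a data frame round-trip. This is a minimal illustration, not the tutorial's materials; the array URI is a local temporary directory, and the `return_as` argument name may vary across package versions.

```r
# Minimal sketch: write a data frame to a local TileDB array,
# then read it back.
library(tiledb)

uri <- tempfile("tiledb.demo")
fromDataFrame(mtcars, uri)        # write mtcars as a TileDB array on disk

arr <- tiledb_array(uri, return_as = "data.frame")
df  <- arr[]                      # read the full array back as a data frame
head(df)
```

Pointing `uri` at an `s3://` path instead of a local directory is, conceptually, all that changes when moving to cloud storage.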