Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
7C - Teaching, Automation and Reproducibility
Time:
Thursday, 08/July/2021:
9:15am - 10:45am

Session Chair: Earo Wang
Zoom Host: Jyoti Bhogal
Replacement Zoom Host: Matt Bannert
Virtual location: The Lounge #talk_teaching_automation_r
Session Topics:
Reproducibility, Teaching R/R in Teaching

Session Sponsor: Roche
Session Slide

Presentations
9:15am - 9:35am
Talk-Live
ID: 143 / ses-07-C: 1
Regular Talk
Topics: Teaching R/R in Teaching
Keywords: automation

A semi-automatic grader for R scripts

Vik Gopal, Samuel Seah, Viknesh Jaya Kumar

National University of Singapore

My department teaches a class in R whose aims are to teach visualisation and good programming practices. Every week, we would attempt to go over as many script submissions as we could, as closely as we could, and then summarise the feedback verbally to the students.

As the class size increased, time constraints meant we could no longer rigorously go through every single student script each week. As a result, we could not identify the common misconceptions that students had, nor intervene to correct the most critical ones early on in the class. Finally, we were unable to view all the visualisations that students created.

Hence we developed an R package to automatically run all student scripts and extract metrics such as run-time and certain code features. The package also collates all the graphs so that we can view them in one go. We also set up a server for students to test their code before submission, ensuring that we can run their code smoothly.

We can now ensure that every student's code is run and analysed consistently and reliably. Instead of scrutinising the code, we look through a summary table of features generated for each script; if something looks strange, we go back to the script. By uploading this table, with comments, to our LMS, we can provide custom feedback for each student. Finally, such a summary table of features indicates the areas in which students need more practice, allowing us to tailor future homework problems.
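
A minimal sketch of this kind of automated script runner, using only base R; the folder name "submissions/", the helper name and the chosen metrics are illustrative assumptions, not the authors' actual package:

# Run one student script in a clean environment and record simple metrics.
run_submission <- function(path) {
  env <- new.env(parent = globalenv())
  timing <- system.time(
    outcome <- tryCatch(source(path, local = env), error = identity)
  )
  data.frame(
    script  = basename(path),
    runtime = unname(timing["elapsed"]),
    errored = inherits(outcome, "error"),
    n_lines = length(readLines(path, warn = FALSE))
  )
}

# Collate one row of metrics per student into a summary table.
scripts     <- list.files("submissions", pattern = "\\.R$", full.names = TRUE)
summary_tbl <- do.call(rbind, lapply(scripts, run_submission))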



9:35am - 9:55am
Talk-Live
ID: 174 / ses-07-C: 2
Regular Talk
Topics: Teaching R/R in Teaching
Keywords: Training, Automation, Systems Administration, Reproducibility, Workflow

Automating bespoke online teaching with R

Rhian Davies

Jumping Rivers, United Kingdom

At Jumping Rivers we deliver over 100 R, Python and Stan training courses each year, engaging with thousands of new learners. The necessity of moving to fully online training in March 2020 meant we quickly had to rethink completely how to deliver R training interactively online. We internally trialled running our usual in-person training over Zoom - and, trust us, it really doesn't work!

We already used R & R Markdown to create all training materials including slides and notes.

However, our new workflow uses R at every step of the way, from creating a bespoke learning environment to collating feedback and generating certificates.

Upcoming training sessions are stored in Asana. Using a single call from R, we extract the relevant Asana task details and:

* Provide the client with a single URL that contains all necessary information for the course

* Deploy a bespoke virtual training environment with {analogsea}

* Automate password generation with {shiny}

* Track and upload attendance sheets

* Create bespoke Google Documents for code quizzes

* Generate fill-in-the-blank tutor R scripts

* Provide automatic feedback reports for clients with {rmarkdown}, {shiny} & {rtypeform}

* Deliver personalised certificates in {shiny}

* Tag the training materials and VM to enable a completely reproducible set-up

This improves the learning experience as the “small” things are automated and allows the trainer to concentrate on actual training.
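
A rough sketch of how such a single R call might orchestrate the set-up. Only {analogsea} and {rmarkdown} are named in the abstract; get_course_task(), blank_out() and the file names below are hypothetical placeholders for internal tooling:

library(analogsea)

set_up_course <- function(task_id) {
  # get_course_task() is a hypothetical stand-in for the internal Asana lookup.
  task <- get_course_task(task_id)

  # Deploy a bespoke virtual training environment with {analogsea}
  # (droplet_create() is a real function; the size/region values are illustrative).
  vm <- droplet_create(name   = paste0("training-", task_id),
                       size   = "s-2vcpu-4gb",
                       region = "lon1")

  # Generate fill-in-the-blank tutor R scripts (blank_out() is hypothetical).
  tutor_scripts <- lapply(task$chapters, blank_out)

  # Render an automatic feedback report for the client with {rmarkdown}
  # ("feedback.Rmd" is an assumed template name).
  rmarkdown::render("feedback.Rmd",
                    params      = list(course = task$course_name),
                    output_file = paste0(task_id, "-feedback.html"))

  invisible(list(task = task, vm = vm, tutor_scripts = tutor_scripts))
}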



9:55am - 10:15am
Talk-Video
ID: 270 / ses-07-C: 3
Regular Talk
Topics: Reproducibility
Keywords: reproducibility, rmarkdown, knitr, report, communication

Extend the functionality of your R Markdown documents

Christophe Dervieux

RStudio

R Markdown is a powerful tool that has grown quickly since its creation. While it can be rather simple to create and maintain a simple reproducible report, advanced customization and dynamic content creation can be more challenging because of the different tools involved (rmarkdown, knitr, Pandoc, LaTeX, ...) and the many possible tweaks. This is all the more true when you consider the already widespread and still growing ecosystem surrounding R Markdown.

Helping users find out how to do specific tasks with R Markdown was the main driver for the book "R Markdown Cookbook" (CRC Press). This talk is based on the content of this book and will present a selection of advanced recipes to go further with an R Markdown document. These examples combine little-known features of some R packages (rmarkdown, knitr) and other tools (Pandoc) to provide flexibility and greatly extend the functionality for producing communication products, programmatically and reproducibly.

The talk will also cover the latest features, at the time of the talk, in the R Markdown family of packages (rmarkdown, knitr, bookdown, blogdown, ...).
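
One small example in the spirit of such recipes (a sketch, not taken verbatim from the book): a custom knitr chunk hook that reports how long each chunk took, placed in the setup chunk of an .Rmd document.

local({
  start_time <- NULL
  knitr::knit_hooks$set(time_it = function(before, options, ...) {
    if (before) {
      # Chunk is about to run: remember the start time.
      start_time <<- Sys.time()
      return(invisible(NULL))
    }
    # Chunk has finished: emit its elapsed time into the output.
    elapsed <- round(as.numeric(difftime(Sys.time(), start_time, units = "secs")), 2)
    paste0("Chunk '", options$label, "' took ", elapsed, " seconds.")
  })
})

# Turn the hook on for every chunk in the document.
knitr::opts_chunk$set(time_it = TRUE)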



10:15am - 10:35am
Talk-Video
ID: 189 / ses-07-C: 4
Regular Talk
Topics: Teaching R/R in Teaching
Keywords: psychometrics, reliability, item response theory, Shiny, teaching R

Computational aspects of psychometrics taught with R and Shiny

Patricia Martinkova (1,2)

(1) Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic; (2) Faculty of Education, Charles University, Prague, Czech Republic

Psychometrics deals with the advancement of quantitative measurement practices in psychology, education, health, and many other fields. It covers a number of statistical methods that are useful for the behavioral and social sciences. Among other topics, it includes the estimation of reliability to deal with the omnipresence of measurement error, as well as a more detailed description of item functioning encompassed in item response theory models.

In this talk, I will discuss some computational aspects of psychometrics, and how understanding these aspects may be supported by real and simulated datasets, interactive examples, and hands-on methods. I will first focus on reliability estimation and the issue of restricted range, showing that zero may not always be zero. I will then focus on a deeper understanding of the context behind more complex models and their much simpler counterparts. The last example discusses group-specific models and the importance of item-level analysis for situations where differences in overall gains are not apparent but differences in item gains may be.

I will finally discuss experiences from teaching computational aspects of psychometrics to a diverse group of students from various fields, including statistics, computer science, psychology, education, medicine, and participants from industry. I will discuss the challenges and joys of creating a truly interdisciplinary course.
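
As an illustration of the restricted-range point above, a minimal simulation (not code from the talk or from ShinyItemAnalysis): two parallel measurements of the same true score estimate reliability well on the full sample, but the estimate drops sharply when only a narrow slice of the score range is kept.

set.seed(1)
n    <- 5000
true <- rnorm(n)            # latent true scores
x1   <- true + rnorm(n)     # two parallel test forms, each with unit-variance error
x2   <- true + rnorm(n)

# Reliability estimated as the correlation of the parallel forms:
cor(x1, x2)                 # close to the true reliability of 0.5

# Keep only examinees in the top decile of the first form (restricted range):
keep <- x1 > quantile(x1, 0.9)
cor(x1[keep], x2[keep])     # noticeably lower than 0.5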

Link to package or code repository.
https://github.com/patriciamar/ShinyItemAnalysis