Conference Agenda (All times are shown in EDT)

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads if available).

Session Overview
Session
Paper Session 06: Personal Information Systems and Recommender Systems [SDGs 1-17]
Time:
Monday, 26/Oct/2020:
9:00am - 10:30am

Session Chair: Sanda Erdelez, Simmons University, United States of America

Presentations
9:00am - 9:15am
ID: 193 / PS-06: 1
Long Papers
Topics: Human Computer Interaction (HCI)
Keywords: Intelligent Personal Assistants, Conversational Agents, Humor, Amazon Mechanical Turk

Assessing User Reactions to Intelligent Personal Assistants’ Humorous Responses

Irene Lopatovska, Elena Korshakova, Tracy Kubert

Pratt Institute, USA

This paper reports on a study that developed a classification of humorous interactions with intelligent personal assistants (IPAs) and applied it to compare the humor performance of four IPAs: Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The study relied on volunteer participants recruited through traditional academic channels (e.g., mailing lists) as well as Amazon Mechanical Turk (AMT). While AMT and non-AMT participants differed on some demographic characteristics, their overall ratings of IPA humor were not significantly different, so the two samples were analyzed jointly using descriptive and inferential statistics. The results revealed that Apple Siri and Google Assistant received higher average ratings for humorous responses than Amazon Alexa and Microsoft Cortana. IPA responses to joke requests were judged funnier than IPA responses to questions about the IPA's personality, rhetorical statements, and other humor types. Consistent with previous studies on humor, our findings did not demonstrate strong relationships between selected user demographics (age, gender, and humor style) and ratings of humorous IPA responses.
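
The abstract does not name the specific inferential tests used; as a minimal, hypothetical sketch of this kind of analysis, ordinal humor ratings across four assistants could be compared with a Kruskal-Wallis test (all ratings below are invented placeholders, not study data):

    # Hypothetical sketch: comparing humor ratings across four IPAs with a
    # Kruskal-Wallis test. The ratings are invented, not data from the study.
    from scipy.stats import kruskal

    ratings = {
        "Siri":      [4, 5, 3, 4, 5, 4],
        "Assistant": [4, 4, 5, 3, 4, 5],
        "Alexa":     [2, 3, 3, 2, 4, 3],
        "Cortana":   [3, 2, 2, 3, 3, 2],
    }

    # Tests whether at least one assistant's rating distribution differs.
    stat, p_value = kruskal(*ratings.values())
    print(f"H = {stat:.2f}, p = {p_value:.4f}")

A significant result would then typically be followed by pairwise post-hoc comparisons to locate which assistants differ.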



9:15am - 9:30am
ID: 203 / PS-06: 2
Long Papers
Topics: Domain-Specific Informatics
Keywords: Exploratory Search, Recommendation, Knowledge Graph, User Profile

Grapevine: A Profile-Based Exploratory Search and Recommendation System for Finding Research Advisors

Behnam Rahdari¹, Peter Brusilovsky¹, Dmitriy Babichenko¹, Eliza Beth Littleton², Ravi Patel³, Jaime Fawcett¹, Zara Blum¹

¹School of Computing and Information, University of Pittsburgh, USA; ²Department of Surgery, University of Pittsburgh School of Medicine, USA; ³School of Pharmacy, University of Pittsburgh, USA

Finding a research advisor is an important and challenging task for college students. On one hand, an advisor who matches a student's interests and past preparation can engage the student in an exciting and productive research experience. On the other hand, students are frequently unable to formulate their interests and experience in a way that allows them to independently find the most compatible advisors using search and browsing tools. This paper reports our experience designing and evaluating Grapevine, an exploratory search and recommender system that helps students, especially those less prepared and underrepresented in certain fields, find research advisors.
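
The abstract does not describe Grapevine's matching algorithm; the sketch below is a hypothetical illustration of the general profile-based idea, ranking advisors by TF-IDF cosine similarity between a student's interest statement and text profiles built from advisors' publications (all names and text are invented):

    # Hypothetical profile-matching sketch (not Grapevine's actual method):
    # rank advisors by TF-IDF cosine similarity to a student's interests.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    advisor_profiles = {                 # invented placeholder profiles
        "Advisor A": "recommender systems user modeling personalization",
        "Advisor B": "health informatics clinical decision support",
        "Advisor C": "information retrieval exploratory search interfaces",
    }
    student_interests = "personalized recommendation and user profiles"

    docs = list(advisor_profiles.values()) + [student_interests]
    matrix = TfidfVectorizer().fit_transform(docs)
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    # Print advisors from best to worst match.
    for name, score in sorted(zip(advisor_profiles, scores), key=lambda p: -p[1]):
        print(f"{name}: {score:.3f}")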



9:30am - 9:40am
ID: 192 / PS-06: 3
Short Papers
Topics: Research Methods
Keywords: Amazon Mechanical Turk, AMT, Crowdsourcing, Data Collection, Participants Recruiting, Research Methods, Questionnaires

Mechanical Turk or Volunteer Participant? Comparing the Two Samples in the Study of Intelligent Personal Assistants

Irene Lopatovska, Elena Korshakova

Pratt Institute, USA

A challenge in academic and practitioner research is recruiting study participants who match target demographics, possess a desired skillset, and will participate for little to no compensation. An alternative to traditional recruitment is crowdsourcing participants through online labor markets such as Amazon Mechanical Turk (AMT), a platform that provides tools for finding and recruiting participants with diverse demographics, skills, and experiences. This paper aims to demystify the use of crowdsourcing, and particularly AMT, by comparing the performance of traditionally recruited volunteers and AMT participants on tasks related to the evaluation of intelligent personal assistants (IPAs, such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana). The comparison of AMT and non-AMT samples indicated that while the two samples differed on demographics, their task performance was not significantly different. The paper discusses the costs and benefits of using AMT samples and will be of particular relevance to researchers who employ questionnaires and/or task-specific data collection methods in their work.
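
Again, the abstract does not name the significance tests used; as one hedged illustration, ordinal task ratings from two independently recruited samples are often compared with a Mann-Whitney U test (the scores below are invented):

    # Hypothetical sketch: testing whether AMT and volunteer samples differ
    # on a task rating. The scores are invented placeholders, not study data.
    from scipy.stats import mannwhitneyu

    amt_scores = [3, 4, 4, 5, 2, 4, 3, 5]        # ratings from AMT workers
    volunteer_scores = [4, 3, 5, 4, 3, 4, 2, 4]  # ratings from volunteers

    # Mann-Whitney U assumes no particular distribution, suiting ordinal data.
    stat, p_value = mannwhitneyu(amt_scores, volunteer_scores,
                                 alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.4f}")  # a large p would indicate
                                                 # no detectable difference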



9:40am - 9:50am
ID: 278 / PS-06: 4
Short Papers
Topics: Library and Information Science
Keywords: Video games, Metadata, Qualitative Analysis, Text Mining, Plot/Narrative

Human Versus Machine: Analyzing Video Game User Reviews for Plot and Narrative

Hyerim Cho¹, Jenny S. Bossaller¹, Denice Adkins¹, Jin Ha Lee²

¹University of Missouri, USA; ²University of Washington, USA

Video game users have shown strong interest in having subject metadata to find games. However, creating and maintaining subject metadata is costly and difficult. This study explores the utility of an automated approach to generating subject metadata for video games, focusing on plot and narrative. By comparing two methods of analyzing the reviews (qualitative analysis conducted by a human researcher versus automated text analysis using topic modeling), the researchers investigate whether an automated method can generate subject terms comparable to those produced by qualitative analysis. Findings suggest that even with a smaller sample dataset, qualitative analysis produced a better set of terms than automated text analysis. However, the terms generated by the automated text analysis indicate that its capability to retrieve video game themes may be useful for future library applications.
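
The abstract specifies topic modeling without naming an implementation; a minimal sketch, assuming an LDA-style model over review text (the reviews and parameters below are invented), shows what the automated side of such a comparison could look like:

    # Minimal, hypothetical sketch: LDA topic modeling over game reviews to
    # surface candidate plot/narrative subject terms. Reviews are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    reviews = [
        "gripping story with memorable characters and plot twists",
        "the narrative branches based on player choices",
        "combat is fun but the plot is thin and predictable",
        "a dark story about revenge with a strong protagonist",
    ]

    # Build a document-term matrix, dropping common English stop words.
    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(reviews)

    # Fit a small LDA model and print the top terms of each topic as
    # candidate subject metadata.
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = [terms[j] for j in weights.argsort()[::-1][:5]]
        print(f"Topic {i}: {', '.join(top)}")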



9:50am - 10:00am
ID: 340 / PS-06: 5
Short Papers
Topics: Data Science, Analytics, and Visualization
Keywords: Funding, Grant Recommendation System, Learning to Rank, Information Retrieval

GotFunding: A Grant Recommendation System Based on Scientific Articles

Tong Zeng¹,², Daniel Acuna²

¹Nanjing University, People's Republic of China; ²Syracuse University, USA

Obtaining funding is an important part of becoming a successful scientist. Junior faculty spend a great deal of time finding the agencies and programs that best match their research profile. But what are the factors that influence the best publication–grant matching? Some universities might employ pre-award personnel to understand these factors, but not all institutions can afford to hire them. Historical records of publications funded by grants can help us understand the matching process and also help us develop recommendation systems to automate it. In this work, we present GotFunding (Grant recOmmendaTion based on past FUNDING), a recommendation system trained on the National Institutes of Health's (NIH) grant–publication records. Our system achieves high performance (NDCG@1 = 0.945) by casting the problem as learning to rank. An analysis of the features that make predictions effective shows that the ranking weighs most heavily: 1) the temporal proximity of the publication to the grant, 2) the amount of information provided in the publication (e.g., document length), and 3) the relevance of the publication to the grant. We discuss future improvements to the system and an online tool for scientists to try.
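
For readers unfamiliar with the reported metric, the sketch below computes one standard variant of NDCG@k (the paper's exact gain or discount formulation may differ); the relevance labels are invented:

    # One standard NDCG@k variant (the paper's formulation may differ).
    # relevance[i] is the graded relevance of the item ranked at position i.
    import numpy as np

    def ndcg_at_k(relevance, k):
        rel = np.asarray(relevance, dtype=float)[:k]
        discounts = np.log2(np.arange(2, rel.size + 2))   # log2(rank + 1)
        dcg = np.sum((2 ** rel - 1) / discounts)
        ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
        idcg = np.sum((2 ** ideal - 1) / discounts)
        return dcg / idcg if idcg > 0 else 0.0

    # With invented labels: ranking the most relevant grant first gives 1.0.
    print(ndcg_at_k([3, 1, 0, 2], k=1))   # 1.0
    print(ndcg_at_k([1, 3, 0, 2], k=1))   # (2**1 - 1) / (2**3 - 1), about 0.143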