9:00am - 9:15am
ID: 193 / PS-06: 1
Topics: Human Computer Interaction (HCI)
Keywords: Intelligent Personal Assistants, Conversational Agents, Humor, Amazon Mechanical Turk
Assessing User Reactions to Intelligent Personal Assistants’ Humorous Responses
Pratt Institute, USA
The paper reports on a study that developed a classification of humorous interactions with intelligent personal assistants (IPAs) and applied it to compare the humor performance of four IPAs: Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The study relied on volunteer participants recruited through traditional academic channels (e.g., mailing lists) as well as Amazon Mechanical Turk (AMT). While AMT and non-AMT participants differed on some demographic characteristics, their overall ratings of IPA humor were not significantly different, so the two samples were analyzed jointly using descriptive and inferential statistics. The results revealed that Apple Siri and Google Assistant received higher average ratings for humorous responses than Amazon Alexa and Microsoft Cortana. IPA responses to joke requests were judged funnier than IPA responses to questions about the IPA's personality, rhetorical statements, and other humor types. Consistent with previous studies on humor, our findings did not demonstrate strong relationships between selected user demographics (age, gender, and humor style) and ratings of humorous IPA responses.
9:15am - 9:30am
ID: 203 / PS-06: 2
Topics: Domain-Specific Informatics
Keywords: Exploratory Search, Recommendation, Knowledge Graph, User Profile
Grapevine: A Profile-Based Exploratory Search and Recommendation System for Finding Research Advisors
1School of Computing and Information, University of Pittsburgh, USA; 2Department of Surgery, University of Pittsburgh School of Medicine, USA; 3School of Pharmacy, University of Pittsburgh, USA
Finding research advisors is an important and challenging task for college students. On one hand, a research advisor who matches a student's interests and past preparation can fully engage the student in an exciting and productive research experience. On the other hand, students are frequently unable to formulate their interests and experience in a way that allows them to independently find the most compatible advisors using search and browsing tools. This paper reports our experience designing and evaluating Grapevine, an exploratory search and recommender system aimed at helping students, especially those less prepared and underrepresented in certain fields, find research advisors.
9:30am - 9:40am
ID: 192 / PS-06: 3
Topics: Research Methods
Keywords: Amazon Mechanical Turk, AMT, Crowdsourcing, Data Collection, Participant Recruiting, Research Methods, Questionnaires
Mechanical Turk or Volunteer Participant? Comparing the Two Samples in the Study of Intelligent Personal Assistants
Pratt Institute, USA
A challenge in academic and practitioner research is recruiting study participants who match target demographics, possess the desired skill set, and will participate for little to no compensation. An alternative to traditional recruitment struggles is crowdsourcing participants through online labor markets, such as Amazon Mechanical Turk (AMT), a platform that provides tools for finding and recruiting participants with diverse demographics, skills, and experiences. This paper aims to demystify the use of crowdsourcing, and particularly AMT, by comparing the performance of traditionally recruited volunteers and AMT participants on tasks related to the evaluation of intelligent personal assistants (IPAs, such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana). The comparison of the AMT and non-AMT samples indicated that while the two samples differed on demographics, their task performance was not significantly different. The paper discusses the costs and benefits of using AMT samples and is of particular relevance to researchers who employ questionnaires and/or task-specific data collection methods in their work.
9:40am - 9:50am
ID: 278 / PS-06: 4
Topics: Library and Information Science
Keywords: Video Games, Metadata, Qualitative Analysis, Text Mining, Plot/Narrative
Human Versus Machine: Analyzing Video Game User Reviews for Plot and Narrative
1University of Missouri, USA; 2University of Washington, USA
Video game users have shown strong interest in having subject metadata to find games. However, creating and maintaining subject metadata is costly and difficult. This study explores the utility of an automated approach to generating subject metadata for video games, focusing on plot and narrative. By comparing two methods of analyzing the reviews, qualitative analysis conducted by a human researcher versus automated text analysis using topic modeling, the researchers investigate whether an automated method can generate subject terms comparable to those produced by qualitative analysis. Findings suggest that even with a smaller sample of data, qualitative analysis produced a better set of terms than automated text analysis. However, the terms generated by automated text analysis indicate that its capability to retrieve themes from video game reviews may be useful to libraries in the future.
9:50am - 10:00am
ID: 340 / PS-06: 5
Topics: Data Science, Analytics, and Visualization
Keywords: Funding, Grant Recommendation System, Learning to Rank, Information Retrieval
GotFunding: A Grant Recommendation System Based on Scientific Articles
1Nanjing University, People's Republic of China; 2Syracuse University, USA
Obtaining funding is an important part of becoming a successful scientist. Junior faculty spend a great deal of time finding the agencies and programs that best match their research profile. But what factors influence the best publication–grant matching? Some universities employ pre-award personnel to identify these factors, but not all institutions can afford to hire them. Historical records of publications funded by grants can help us understand the matching process and also help us develop recommendation systems to automate it. In this work, we present GotFunding (Grant recOmmendaTion based on past FUNDING), a recommendation system trained on the National Institutes of Health's (NIH) grant–publication records. Our system achieves high performance (NDCG@1 = 0.945) by casting the problem as learning to rank. An analysis of the features that make predictions effective shows that the most important factors in the ranking are 1) the temporal proximity of the publication to the grant, 2) the amount of information provided in the publication (e.g., document length), and 3) the relevance of the publication to the grant. We discuss future improvements to the system and an online tool for scientists to try.
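NDCG@1, the metric reported in this abstract, measures how relevant the single top-ranked recommendation is relative to the best possible top choice. The sketch below shows the standard NDCG@k computation; the relevance grades in the example are invented for illustration and are not from the GotFunding dataset:

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k: discounted cumulative gain of the ranked list,
    normalized by the gain of the ideal ordering of the same items."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical graded relevance of grants in model-ranked order:
# 2 = grant that funded the paper, 1 = related program, 0 = irrelevant.
ranked = [2, 0, 1, 0]
print(ndcg_at_k(ranked, 1))  # 1.0: the most relevant grant is ranked first
```

An NDCG@1 of 0.945 thus means the system's top-ranked grant is, on average, nearly as relevant as the best possible choice.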