Amy Pavel


Email: apavel@cs.utexas.edu

Office: GDC 3.704

Twitter: @amypavel

Publications: Google Scholar

Curriculum Vitae: PDF

People
Ph.D. Students: Mina Huh, Karim Benharrak, Yi-Hao Peng (co-advised with Jeffrey P. Bigham)
Master's Students, Undergraduates, and RAs: Ananya G M, Aadit Barua, Akhil Iyer, Yuning Zhang, Doeun Lee, Tess Van Daele, Pranav Venkatesh
Recent Alumni: Daniel Killough, Jalyn Derry, Aochen Jiao, Soumili Kole, Chitrank Gupta

Archived job materials: Research · Teaching

I am an Assistant Professor in the Department of Computer Science at The University of Texas at Austin. I am recruiting Ph.D. students for Fall 2024. If you are a prospective graduate student or postdoc, please feel free to get in touch!

Previously, I was a postdoctoral fellow at Carnegie Mellon University (supervised by Jeff Bigham) and a Research Scientist at Apple. I received my Ph.D. from the Department of Electrical Engineering and Computer Sciences at UC Berkeley, advised by Professors Björn Hartmann at UC Berkeley and Maneesh Agrawala at Stanford. My Ph.D. work was supported by an NDSEG Fellowship, a departmental Excellence Award, and a SanDisk Fellowship.

I regularly teach a Computer Science class that covers the design and development of user interfaces (Introduction to Human-Computer Interaction). Prior versions of this class include CS 160 at UC Berkeley in Summer 2018 and CS 378 at UT Austin in Spring 2022.


Research Summary

As a systems researcher in Human-Computer Interaction, I embed machine learning technologies (e.g., Natural Language Processing) into new human interactions that I then deploy and test. Using my systems, remote content creators collaborate more effectively, video authors efficiently create accessible descriptions for blind users, and instructors help students learn and retain key points. To inform future systems that capture what is important to domain experts and people with disabilities, I also conduct and collaborate on in-depth qualitative studies (e.g., AAC communication, memes) and quantitative studies (e.g., 360° video, VR saliency). My long-term research goal is to make communication more effective and accessible.

Research Papers

A thumbnail of the ShortScribe pipeline. The pipeline takes the video as input, transcribes it and describes the keyframes, then uses GPT-4 to generate multiple types of descriptions.
ShortScribe: Making Short-Form Videos Accessible with Hierarchical Video Summaries

Tess Van Daele, Akhil Iyer, Yuning Zhang, Jalyn Derry, Mina Huh, Amy Pavel

Conditionally Accepted to CHI 2024

PDF coming soon
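
The ShortScribe entry above describes a three-stage pipeline: transcribe the video's audio, describe sampled keyframes, and prompt GPT-4 to generate descriptions at several levels of detail. As a rough illustration only, a minimal sketch of a pipeline with that shape might look like the following; the transcription call, the describe_keyframes helper, and the prompts are illustrative assumptions, not ShortScribe's actual implementation.

# Illustrative sketch of a ShortScribe-style pipeline (not the actual system).
# Assumes an OpenAI API key in the environment; describe_keyframes is a
# hypothetical placeholder for any image-captioning model.
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Transcribe the short-form video's audio track with Whisper."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def describe_keyframes(keyframe_paths: list[str]) -> list[str]:
    """Placeholder: run an image-captioning model over sampled keyframes."""
    raise NotImplementedError("swap in any captioning model here")

def summarize(transcript: str, frame_captions: list[str], level: str) -> str:
    """Ask GPT-4 for a video description at the requested level of detail."""
    prompt = (
        f"Transcript:\n{transcript}\n\n"
        "Keyframe captions:\n" + "\n".join(frame_captions) + "\n\n"
        f"Write a {level} description of this short-form video for a blind viewer."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: produce descriptions at several levels of detail.
# transcript = transcribe("video_audio.mp3")
# captions = describe_keyframes(["frame_01.jpg", "frame_02.jpg"])
# for level in ("one-sentence", "paragraph-length", "shot-by-shot"):
#     print(summarize(transcript, captions, level))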

A thumbnail of the COMPA interface that features views for the AAC user and their conversational partners.
COMPA: Using Conversation Context to Achieve Common Ground in AAC

Conditionally Accepted to CHI 2024

PDF coming soon

Additional paper thumbnails cover: potential accessibility barriers in VR; describing generated image options; timestamped descriptions of livestreams; audio-visual scripts that surface errors in video footage (e.g., camera blur); SlideSpecs for audience feedback on slides; CrossA11y's comparison of video frames with the corresponding audio; Diffscriber's interface for reviewing slide changes; Tech Help Desk; alt text coverage on Twitter; the tutorial lens interface; SlideCho's interface for surfacing information in slide videos; semantic exemplars; highlighting which parts of a lecture slide the presenter spoke aloud; YouTube search results augmented with accessibility information; a co-design workshop with an AAC user, her conversational partner, and puppeteers; shortening extended audio descriptions into inline descriptions; prototypes for making AR apps accessible; missing alt text on public health tweets (e.g., a #FlattenTheCurve infographic); tweet images and their alt text; conversation between an AAC user and two partners; meme descriptions; visual saliency in VR; editing techniques for 360° video (traditional cuts, viewpoint-oriented cuts, and active reorientation); VidCrit; CrowdCrit; and Video Digests.

Thesis and Technical Reports

Amy Pavel

PhD in Computer Science, University of California, Berkeley

Advisors: Björn Hartmann and Maneesh Agrawala

Additional committee members: Eric Paulos, Abigail De Kosnik

Posters and Workshops

Poster and workshop thumbnails cover: a forum people use to report flashing lights; the Twitter A11y system diagram; precision-at-one as a function of the number of noteworthy sentences considered; the CrowdCrit critique process; and the Sifter interface for browsing common sets of image editing commands.

Work

Assistant Professor, University of Texas at Austin

Department of Computer Science

January 2022 – Present

Research Scientist (50% time), Apple Inc.

AI/ML

Machine Intelligence Accessibility Group

July 2019 – January 2022

Postdoctoral Fellow (50% time), Carnegie Mellon University

HCII

Supervised by Professor Jeffrey P. Bigham

January 2019 – October 2021

Graduate Researcher, UC Berkeley

Visual Computing Lab

Advised by Professors Björn Hartmann and Maneesh Agrawala

September 2013 – January 2019

Research Intern, Adobe

Creative Technologies Lab

Advised by Principal Scientist Dan Goldman

Summer 2014, Summer 2015

Undergraduate Researcher, UC Berkeley

BiD Lab, Visual Computing Lab

Advised by Professors Björn Hartmann and Maneesh Agrawala

June 2011 – September 2013

Teaching

Instructor, UT Austin

CS 395T: Human-Computer Interaction Research

Fall 2023

Instructor, UT Austin

CS 378: Introduction to Human-Computer Interaction

Spring 2023

Instructor, UT Austin

CS 378: Introduction to Human-Computer Interaction

Spring 2022

Instructor, UC Berkeley

CS 160: User Interface Design and Development

Summer 2018

Graduate Student Instructor, UC Berkeley

CS 160: User Interface Design and Development

Summer 2017

Student Project Advisor, UC Berkeley

NWMEDIA 190: Making Sense of Cultural Data

Fall 2017

Instructor, UC Berkeley

CS Kickstart, an introductory CS program for incoming freshmen women

Summer 2012

Teacher, UC Berkeley

Berkeley Engineers and Mentors

2009 – 2010