Radar chart with the seven axes of the design space and their ordinal values: Interruption (interruptible, conditional, continuous), Confidence (dynamic, static), Output (implicit, explicit), Selection (single, multiple), Cardinality (one, two, three), Concurrency (sequential, concurrent), and Notification (always, threshold).

The Design Space of Nonvisual Word Completion

Word completion interfaces are ubiquitous in mobile virtual keyboards; however, there is no prior research on how to design these interfaces for screen reader users. To address this, we propose a design space for the nonvisual representation of word completions. The design space covers seven categories, aiming to identify challenges and opportunities for interaction design in an unexplored research topic.

Hugo Nicolau, André Rodrigues, André Santos, Tiago Guerreiro, Kyle Montague, João Guerreiro

ASSETS 2019 ‑ In The 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘19). ACM, New York, NY, USA, 249-261.

Best Paper Nominee

A user with a row of RFID cards in front of him, holding one close to the RFID reader.

Open Challenges of Blind People using Smartphones

Through a multiple-methods approach, we identify and validate challenges locally, with a diverse set of user expertise and devices, and at scale, through the analysis of the largest Android and iOS dedicated forums for blind people. We contribute a prioritized corpus of smartphone challenges for blind people, and a discussion of directions for future research that tackle the open and often overlooked challenges.

André Rodrigues, Hugo Nicolau, Kyle Montague, João Guerreiro and Tiago Guerreiro

pre-print 2019 ‑ arXiv preprint arXiv:1909.09078.

iCETA: headphones, computer, mirror in the camera, tangible blocks and working area on top of the keyboard.

A Tangible Math Game for Visually Impaired Children

iCETA is an inclusive interactive system for math learning, designed through a set of participatory sessions with visually impaired children and their educators. iCETA supports math learning through the combination of tangible interaction with haptic and auditory feedback.

Ana Cristina Pires, Sebastian Marichal, Fernando Gonzalez-Perilli, Ewelina Bakala, Bruno Fleischer, Gustavo Sansone, Tiago Guerreiro

ASSETS 2019 ‑ 21st International ACM SIGACCESS Conference on Computers and Accessibility. Pittsburgh, PA, USA. October, 2019

Table with code frequencies in the tutorial instructions, showing significant differences between sighted and blind people's instructions: sighted people with 20% of incorrect text and a high number of references to location; blind people with a high number of gesture explanations and navigation cues.

Understanding the Authoring and Playthrough of Nonvisual Smartphone Tutorials

We sought to understand how sighted and blind people instruct other blind users to accomplish tasks on a mobile device, and how those instructions enabled blind people to be successful. Results showed that a single pass of instructions was limited; we discuss a set of ways in which support can be provided.

André Rodrigues, André Santos, Kyle Montague, Hugo Nicolau, Tiago Guerreiro

INTERACT 2019 ‑ 17th IFIP TC13 International Conference on Human-Computer Interaction, Paphos, Cyprus, September, 2019

Photo taken during mathematics training with CETA system using tangibles. In this photo we observe an entire first grade classroom with 22 young children, playing with the blocks using our system CETA. Each child has a headphone, a tablet located on the table and the blocks to solve the additive composition tasks.

Building blocks of mathematical learning: digital and tangible manipulatives lead to different strategies in number composition

Is it indispensable that objects may be grasped, lifted, and explored, or would it be enough to interact with virtual manipulatives? More specifically, how do objects' affordances (i.e., the possibility to grasp physical objects or drag virtual ones) shape and constrain children's composing strategies?

Ana Cristina Pires, Fernando González Perilli, Ewelina Bakała, Bruno Fleisher, Gustavo Sansone and Sebastián Marichal

Frontiers in Education 2019 ‑ Educational Psychology, 2019

10 application screens all with very different interfaces. Interfaces with grids, lists, keyboard, no interactive items, logins, tutorials and tables.

Mobile Web

Accessing the Web with mobile devices, either through a browser or a native application, has become more than a perk; it is a need. This relevance has increased the demand for accessible mobile webpages. In this work, we focus our attention on the challenges of mobile devices for accessibility, and how those have been addressed in the development and evaluation of mobile interfaces and contents.

Tiago Guerreiro, Luís Carriço, André Rodrigues

Web Accessibility 2019 ‑ Chapter 38 in S. Harper & Y. Yesilada (eds.), Web Accessibility: A Foundation for Research (2nd ed.). London, England, Springer-Verlag.

Storyboard demonstrating a shower attack.

Vulnerability & Blame: Making Sense of Unauthorized Access to Smartphones

Unauthorized physical access to personal devices by people known to the owner of the device is a common concern, and a common occurrence. But how do people experience incidents of unauthorized access? Using an online survey, we collected 102 accounts of unauthorized access. Participants wrote stories about past situations in which either they accessed the smartphone of someone they know, or someone they know accessed theirs.

Diogo Marques, Tiago Guerreiro, Luís Carriço, Ivan Beschastnikh, Konstantin Beznosov

CHI 2019 ‑ ACM Conference on Human Factors in Computing Systems, Glasgow, Scotland, May, 2019

Caregiving for a person with Alzheimer's can be a very demanding task, both from physical and psychological perspectives. Technological responses to support caregiving and improve the quality of life of people with Alzheimer's and their caregivers are lacking.

MATY: Designing An Assistive Robot for People with Alzheimer’s

Using a research-through-design approach, we devised a robot focused on empowering people with Alzheimer's and fostering their autonomy, from the initial sketch to a working prototype. MATY is a robot that encourages communication with relatives and promotes routines by eliciting the person to take action, using a multisensorial approach (e.g., projecting biographical images, playing suggestive sounds, or emitting soothing aromas). The paper reports the iterative, incremental design process performed together with stakeholders. We share the first lessons learned in this process with HCI researchers and practitioners designing solutions, particularly robots, to assist people with dementia and their caregivers.

Hugo Simão, Tiago Guerreiro

CHI 2019 ‑ In The CHI EA ‘19 Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems


Designing Free-Living Reports for Parkinson’s Disease

The democratization of wearable sensing technologies opened several possibilities for the continuous monitoring of people in their homes. We developed a platform where usable reports are presented to clinicians, particularly in the context of Parkinson's disease monitoring. The presented information originates from accelerometer sensors.

Diogo Branco, Raquel Bouça, Joaquim Ferreira, Tiago Guerreiro

CHI 2019 ‑ Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK

An example of a clinical assessment: the sit-to-stand exercise.

DataPark : A Data-Driven Platform for Parkinson’s Disease Monitoring

We developed a platform where usable reports are presented to clinicians, particularly in the context of Parkinson's disease monitoring. The presented information originates from accelerometer sensors and subjective data collected over an Interactive Voice Response system.

Diogo Branco, César Mendes, Ricardo Pereira, André Rodrigues, Raquel Bouça, Kyle Montague, Joaquim Ferreira, Tiago Guerreiro

WISH Symposium 2019 ‑ Workgroup on Interactive Systems in Healthcare, co-located with CHI '19, Glasgow, UK, May, 2019


Main cognitive stimulation activities of Scrapbook. From left to right: a. Reminiscence therapy; b. Flashcard; c. Street View navigation; d. Quiz; e. Touch; f. Puzzle (in fullscreen).

Designing Personalized Therapy Tools For People with Dementia

We iteratively designed a web platform focused on personalized cognitive stimulation. The platform was deployed in clinical contexts for several months and iterated, being enriched with functionalities like group reminiscence, caregiver app, or biographical activities.

Sérgio Alves, Andreia Cordeiro, Filipa Brito, Luís Carriço, Tiago Guerreiro

W4A 2019 ‑ 16th International Web for All Conference, San Francisco, USA, May, 2019

Best Technical Paper Nominee

The robot Baxter with his arms extended and a blind person feeling its hands

What My Eyes Can’t See, A Robot Can Show Me: Exploring the Collaboration Between Blind People and Robots

We present a qualitative analysis of the expectations, fears, and needs reported by a sample of blind participants. In a second study, we implement and discuss the effect of two types of robotic assistance during an assembly task. Results from our two studies support the usefulness of developing and introducing this form of collaborative assistive technology in the lives of people with visual impairments. Positive outcomes for users (such as an increased level of autonomy in everyday life tasks) are outlined and discussed.

Mayara Bonani, Raquel Oliveira, Filipa Correia, André Rodrigues, Tiago Guerreiro, Ana Paiva

ASSETS 2018 ‑ 20th International ACM SIGACCESS Conference on Computers and Accessibility, Galway, Ireland, October, 2018

How to create a group in WhatsApp. 1. We open WhatsApp. 2. We do the gesture until Chat. 3. Then again, left-right until New Chat; we enter. 4. Then left-right until Groups; then we go until the headline New Group; we enter. 5. Now we do the up-down gesture to go to the last element of the page; then, without lifting the finger, we double tap and stay on Next. 6. We are now in the edit box to write the subject that identifies the group. We write the name and the subject and we click the keyboard key to Submit. Then again, left-right until the Create button.

AidMe: Interactive Non-Visual Smartphone Tutorials

AidMe is a system-wide authoring and playthrough tool for non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible interactive non-visual tutorials.

André Rodrigues, Leonardo Camacho, Hugo Nicolau, Kyle Montague, Tiago Guerreiro

MOBILECHI 2018 ‑ 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, Barcelona, Spain, September, 2018


Hybrid-Brailler : Combining Physical and Gestural Interaction for Mobile Braille Input and Editing

We present a smartphone case with physical buttons that allows users to write Braille on the back of the device while gesturing with the thumbs on the touchscreen. This enabled the study of novel editing approaches, which are very limited in commodity smartphones and accessibility services.

Daniel Trindade, André Rodrigues, Tiago Guerreiro, Hugo Nicolau

CHI 2018 ‑ ACM Conference on Human Factors in Computing Systems, Montreal, Canada, May, 2018

We propose a paradigm shift where interactions and contributions by knowledgeable users can assist others beyond what mobile applications and operating systems provide. Interaction data collection methods are fragmented: data are gathered by each app and operating system individually, for the purpose of self-improvement, with limited control and awareness by the user.

Data Donors: Sharing Knowledge for Mobile Accessibility

Inspired by charitable donations, Data Donors is a conceptual framework that enables users with the capacity to help others to do so by donating their mobile interaction data and knowledge.

André Rodrigues, Kyle Montague, Tiago Guerreiro

CHI 2018 ‑ Late Breaking Work - Extended Abstracts of the ACM Conference on Human Factors in Computing Systems, Montreal, Canada, May, 2018

Patient performing a jigsaw puzzle.

Enabling Biographical Cognitive Stimulation for People with Dementia

In this paper we introduce the initial development process of Scrapbook. After an initial study to understand current clinical practices, we developed a platform focused on enabling psychologists to perform reminiscence therapy with people with dementia. A two-week study was performed in a clinical environment.

Sérgio Alves, Andreia Cordeiro, Filipa Brito, Luís Carriço, Tiago Guerreiro

CHI 2018 ‑ Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC, Canada

static overlays

Improving smartphone accessibility with personalizable static overlays

We present an approach that superimposes a virtual overlay over all other interfaces, ensuring interface consistency by restructuring how content is accessed on every screen. The screen is split in two: half is dedicated to a configurable set of static options, regardless of context, while the other half enables the standard content navigation gestures, with the ability to re-order content and apply filters.

André Rodrigues, André Santos, Kyle Montague, Tiago Guerreiro

ASSETS 2017 ‑ 19th International ACM SIGACCESS Conference on Computers and Accessibility. Baltimore, Maryland, USA, October, 2017

A) Volunteer web app, showing two answered questions, one with a specific element of the interface highlighted. B) Hint Me!, with the always-available button at the top of the screen and a notification showing the user that an answer was received.

In-context Q&A to Support Blind People Using Smartphones

Hint Me! is a human-powered service that allows blind users to get in-app smartphone assistance. We conducted an exploratory user study with six blind participants to elicit their perceptions of the usefulness and acceptance of human-powered networks for smartphone support.

André Rodrigues, Kyle Montague, Hugo Nicolau, João Guerreiro and Tiago Guerreiro

ASSETS 2017 ‑ 19th International ACM SIGACCESS Conference on Computers and Accessibility. Baltimore, Maryland, USA, October, 2017.

Average of six WPM after 12 weeks. The figure shows participants' input speed over 12 weeks. Overall, the average input speed in the real world improved from week 1 (M = 3.2, SD = 0.8 WPM) to week 12 (M = 5.9, SD = 0.2 WPM), as in the laboratory, with all participants improving typing speed over time. Still, learning rates were lower in real-world data, with an improvement of 0.2 WPM per week. Everyday typing is faster than laboratory typing: everyday typing speed is consistently higher than laboratory results, with differences between real-world and laboratory of 1.6 WPM and 1.4 WPM in week 1 and week 8, respectively.

Investigating Laboratory and Everyday Typing Performance of Blind Users

For 12 weeks, we collected field data, coupled with eight weekly laboratory sessions. This article provides a thorough analysis of everyday typing data and its relationship with controlled laboratory assessments.

Hugo Nicolau, Kyle Montague, Tiago Guerreiro, André Rodrigues, Vicki L. Hanson

TACCESS 2017 ‑ ACM Transactions on Accessible Computing (TACCESS) - Special Issue (Part 2) of Papers from ASSETS 2015


Four keyboard sizes side by side: Large, about the same size as the other three combined, with keys of 15 mm; Medium, bigger than the remaining two, with keys of 10 mm; Small, with keys of 5 mm; and Tiny, with keys of 2.5 mm.

Effect of target size on non-visual text-entry

We investigate how nonvisual input performance on touchscreens varies with four QWERTY keyboard sizes (with keys ranging from 15 mm to 2.5 mm). This paper presents an analysis of typing performance and touch behaviors, discussing its implications.

André Rodrigues, Hugo Nicolau, Kyle Montague, Luís Carriço, Tiago Guerreiro

MobileHCI 2016 ‑ Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Pages 47-52, Florence, Italy, September 06-09, 2016

Honorable mention

A keyboard scheme with dots representing each of the collected touch points. Each key has dots of a different color. There is a concentration of dots on the most used keys (e.g., a, s, and space).

Typing Performance of Blind Users: An Analysis of Touch Behaviors, Learning Effect, and In-Situ Usage

For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated measures of text-entry and reports on character-level errors and touch measures. Our findings show that users improve over time, even though it is at a slow rate (0.3 WPM per week).

Hugo Nicolau, Kyle Montague, André Rodrigues, Tiago Guerreiro, Vicki Hanson

ASSETS 2015 ‑ 17th International ACM SIGACCESS Conference on Computers and Accessibility. Lisboa, Portugal, October, 2015

A blind person interacting with a smartphone using headphones.

Getting Smartphones to Talkback: Understanding the Smartphone Adoption Process of Blind Users

We conducted a twelve-week in-the-wild longitudinal study with five novice blind users to understand the adoption process of smartphones. We characterized their concerns, barriers, support mechanisms, and evolution throughout the study period.

André Rodrigues, Kyle Montague, Hugo Nicolau, Tiago Guerreiro

ASSETS 2015 ‑ 17th International ACM SIGACCESS Conference on Computers and Accessibility. Lisboa, Portugal, October, 2015

HoliBraille, a smartphone case with six vibrotactile motors for multipoint output. (a) Representation of 'f' using the Braille code: dots 1, 2, and 4. (b) The system outputs character 'f' through direct and localized feedback on the user's fingers. (c) The system consists of six vibrotactile motors attached to springs and a 3D-printed case. The springs mold to users' hands and dampen vibrations through the device, preventing propagation between fingers and allowing better stimuli discrimination.

HoliBraille: multipoint vibrotactile feedback on mobile devices

HoliBraille is a system capable of localized vibrotactile feedback that can be combined with the input capabilities of mobile devices. We used a custom-made case and off-the-shelf vibrotactile actuators combined with dampening materials. The solution can be attached to mainstream touchscreen devices, enabling direct feedback on users' fingers. In this paper, we contribute the following: 1) application scenarios that can benefit from HoliBraille; 2) the design and technical description of the proposed device; and 3) an evaluation of HoliBraille on a foundational task for future Braille-related applications, i.e., character discrimination.

Hugo Nicolau, Kyle Montague, Tiago Guerreiro, André Rodrigues and Vicki L Hanson

W4A 2015 ‑ In Proceedings of the 12th Web for All Conference (W4A ‘15). ACM, New York, NY, USA, Article 30, 4 pages.

The QWERTY keyboard and the characters' spatial positions in the 3D audio space. Characters are given an audio spatial position according to their location on the keyboard. They are grouped according to the vertical columns of the keyboard (e.g., Q and A; then W, S, and Z), resulting in 10 different spatial locations separated by 20º. For example, A is heard on the far left (180º) while N is heard more to the right (60º), allowing for simultaneous speech signals.

TabLets get physical: non-visual text entry on tablet devices

Our system combines spatial and simultaneous audio feedback with multitouch selection techniques to mimic traditional two-hand keyboard interaction. SpatialTouch enables blind users to rest their idle hand on a key (e.g. F or J), while simultaneously exploring the keyboard with their active hand and receiving auditory feedback about the character location.
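The column-to-azimuth mapping described in the figure can be sketched as follows. This is a hypothetical illustration, not the authors' code: the exact column grouping and angle convention are assumptions based on the figure description (10 column groups spaced 20º apart, from 180º at the far left to 0º at the far right).

```python
# Hypothetical sketch of SpatialTouch's spatial audio mapping (assumed,
# not the published implementation): each QWERTY column group gets one
# azimuth, spaced 20 degrees apart across the frontal plane.

# Assumed column groups, following the figure (Q and A share a position;
# W, S and Z share the next one; and so on).
COLUMNS = [
    "qa", "wsz", "edx", "rfc", "tgv",
    "yhb", "ujn", "ikm", "ol", "p",
]

def azimuth(char: str) -> int:
    """Return the assumed azimuth (degrees) at which `char` is spoken."""
    for i, group in enumerate(COLUMNS):
        if char.lower() in group:
            return 180 - 20 * i  # group 0 -> 180 (far left), group 9 -> 0
    raise ValueError(f"not a letter key: {char!r}")
```

Under these assumptions, `azimuth("a")` yields 180º (far left) and `azimuth("n")` yields 60º, matching the examples in the figure description.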

João Guerreiro, André Rodrigues, Kyle Montague, Tiago Guerreiro, Hugo Nicolau, Daniel Gonçalves

CHI 2015 ‑ ACM Conference on Human Factors in Computing Systems, Seoul, South Korea, April, 2015


B# is a novel correction system for multitouch Braille input. (a) The user types the letter F, holding the device with the screen facing forward, using Braille typing. (b) Character-level correction: the closest characters in terms of Braille distance for two unidentified chords; example given with the O character, whose closest characters are O, S, R, and T. (c) Word-level correction: top suggestions returned by B#, considering the letters that are at a Braille distance of one from the entered chord.

B#: chord-based correction for multitouch braille input

B# is a novel correction system for multitouch Braille input that uses chords as the atomic unit of information rather than characters. Experimental results on data collected from 11 blind people revealed that B# is effective in correcting errors at the character level, thus providing opportunities for instant corrections of unrecognized chords, and at the word level, where it outperforms a popular spellchecker by providing correct suggestions for 72% of incorrect words (against 38%).
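The core idea of chord-level correction can be illustrated with a small sketch. This is not the published B# implementation; it assumes a simple "Braille distance" defined as the number of dots by which two 6-dot chords differ, and uses a handful of standard Braille cells for illustration.

```python
# Hypothetical sketch of chord-based correction (assumed, not B#'s code):
# a chord is the set of Braille dots (1-6) pressed; the "Braille distance"
# between two chords is taken here to be their symmetric difference size.

BRAILLE = {  # standard 6-dot Braille cells for a few letters
    "f": {1, 2, 4},
    "o": {1, 3, 5},
    "s": {2, 3, 4},
    "r": {1, 2, 3, 5},
    "t": {2, 3, 4, 5},
}

def braille_distance(a: set, b: set) -> int:
    """Number of dots by which two chords differ."""
    return len(a ^ b)

def closest_letters(chord: set, max_dist: int = 1) -> list:
    """Letters whose Braille cells are within `max_dist` of the chord."""
    return sorted(
        ch for ch, dots in BRAILLE.items()
        if braille_distance(chord, dots) <= max_dist
    )
```

For an unrecognized chord near O (dots 1, 3, 5), this toy distance would suggest O itself and R (dots 1, 2, 3, 5), one dot away; the actual B# ranking may differ.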

Hugo Nicolau, Kyle Montague, Tiago Guerreiro, João Guerreiro and Vicki L. Hanson

CHI 2014 ‑ In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘14). ACM, New York, NY, USA, 1705-1708.

Best paper award

The UbiBraille prototype consists of six rings augmented with vibrotactile capabilities. The rings are worn on the index, middle, and ring fingers of both hands.

UbiBraille: designing and evaluating a vibrotactile Braille-reading device

UbiBraille consists of six vibrotactile actuators that are used to code a Braille cell and communicate single characters. The device is able to simultaneously actuate the users’ index, middle, and ring fingers of both hands, providing fast and mnemonic output. We conducted two user studies on UbiBraille to assess both character and word reading performance. Character recognition rates ranged from 54% to 100% and were highly character- and user-dependent.

Hugo Nicolau, João Guerreiro, Tiago Guerreiro and Luís Carriço

ASSETS 2013 ‑ In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '13). ACM, New York, NY, USA, Article 23, 8 pages.

BrailleType main screen on the left (visual representation of the six target zones was added for illustration). Middle screen shows the letter ‘r’ marked and ready to be accepted. The image on the right shows a user writing the letter ‘r’ with BrailleType.

BrailleType: Unleashing Braille over Touch Screen Mobile Phones

We present BrailleType, a single-touch text-entry system for touch screen devices. BrailleType allows blind users to enter text as if they were writing Braille using the traditional 6-dot matrix code. We performed a user study with fifteen blind subjects to assess this method's performance against Apple's VoiceOver approach. BrailleType, although slower, was significantly easier and less error-prone.

João Oliveira, Tiago Guerreiro, Hugo Nicolau, Joaquim Jorge, and Daniel Gonçalves

INTERACT 2011 ‑ Human-Computer Interaction – INTERACT 2011, pp. 100-107

(a) A mobile phone with a numpad, with directional arrows overlaid on the numpad and on a joypad. (b) Explanation of a navigation scenario: each jump goes to the next vowel (a, e, i, o, u); after entering a vowel, you listen to the next letters of the alphabet until you find the desired target.

NavTap: a Long Term Study with Excluded Blind Users

In this paper, we describe a long-term evaluation of the NavTap prototype, a mobile device system for blind people with a navigational text-entry method as its main component. The evaluation was performed over 19 weeks with 5 users (plus 3 extra users during an iterative design phase) and, besides uncontrolled (but logged) daily usage, featured regular controlled experiments to observe the users' evolution (Figure 1). With these experiments we were able to collect data on mobile device usage performance, particularly text-entry, and also to observe how the improvements influenced the users' habits and interactions.
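The vowel-based navigation scheme described above can be sketched as a small model counting the actions needed to reach a letter. This is a hypothetical illustration, not the original NavTap implementation: it assumes navigation starts at 'a', one action jumps to the next vowel, and another steps letter by letter through the alphabet.

```python
# Hypothetical sketch of NavTap-style navigation (assumed, not the
# original code): reach any letter with vowel jumps plus sequential steps.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOWELS = "aeiou"

def keystrokes_to(target: str) -> int:
    """Count of jump/step actions needed to reach `target` from 'a'."""
    idx = ALPHABET.index(target.lower())
    # Jump to the last vowel at or before the target letter...
    vowel_idx = max(i for i in map(ALPHABET.index, VOWELS) if i <= idx)
    jumps = VOWELS.index(ALPHABET[vowel_idx])
    # ...then step through the following letters until the target.
    steps = idx - vowel_idx
    return jumps + steps
```

Under these assumptions, 'e' takes one jump, while 'g' takes one jump (to 'e') plus two steps; the bound on steps is the longest vowel-to-vowel gap in the alphabet.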

Tiago Guerreiro, Hugo Nicolau, Joaquim Jorge, Daniel Gonçalves

ASSETS 2009 ‑ In The 11th International ACM SIGACCESS Conference on Computers and Accessibility. Pittsburgh, USA, October, 2009.