Join us

The Tech&People group is always on the lookout for enthusiastic candidate members with multidisciplinary backgrounds. In our work, we embed ourselves in the contexts we are designing for (formative studies), we design and build web, mobile, wearable, robotic, gaming, and tangible interactive systems, and we evaluate them with target users. If you join our group, it is likely that you will work closely with end-users, collaborate with a multidisciplinary team (engineers, designers, data scientists, psychologists, clinicians), build novel and impactful interactive systems, eventually publish and present your work internationally, and see your project being used in real-life contexts.

Below, you can find the master's thesis proposals that the group is offering for the 2022/2023 academic year. We also present a set of stories about what members of the group did in their master's or are doing now, with the goal of providing some background to the proposals and an overall view of the opportunities you can find at Tech&People. Lastly, you can find a set of Frequently Asked Questions. If you want to know more, get in touch by e-mail, follow us on Twitter, or check our publications.


MSc Proposals 2022/2023


A man with a VR headset angrily swinging a bat

Using Virtual Reality to Monitor the Health and Wellbeing of Children

Motivation

VR headsets are emerging in the market. These devices offer a degree of immersion like no other technology, and users' reactions to virtual environments can be sensed in real time. With hardware like Emteq Pro, it is possible to create environments that expose people to their triggers and assess their physiological response. This creates an immense opportunity to collect physiological data and sense how users respond to environments without putting the person at risk.

What you will do

In this thesis, the student will be challenged to create environments for exposure therapy of children with anxiety. These environments will be created in collaboration with therapists and will incrementally expose children to their triggers (in a controlled environment), assessing their response and enabling personalised therapy adaptation. The work will include developing environments in Unity, working with VR headsets and physiological sensors (e.g., Emteq Pro), and collaborating with psychologists and children in creating and testing the environments.

Team

Tiago Guerreiro
João Guerreiro
Filipa Brito

The Oculus Quest headset

Biofeedback to Measure Behavioural Response during VR Cue-exposure for Diagnosis

Motivation

Virtual Reality allows for immersive experiences, as users can embody their characters/avatars and perceive the environment from an egocentric (or first-person) perspective. This enables the creation of experiences that resemble real life but in controlled environments, which has great potential for measuring behavioural response during exposure to situations that could otherwise be dangerous or cause distress. In this thesis, we aim to explore how realistic virtual environments can be used for diagnosis (for instance, of Alcohol Use Disorder), by using sensors to objectively assess behavioural response (e.g., heart rate) during exposure (in this case, exposure to alcohol-related virtual environments).

What you will do

In this thesis, you will be challenged to design, develop and evaluate novel solutions that use immersive VR environments to induce different emotional responses in users while they are being physiologically measured (e.g., using smartwatches, physiological kits, or bracelets), for example in disorder-related settings (e.g., exposure to a bar and drinks to assess alcohol use disorders). You will conduct user studies early on to engage stakeholders in co-design sessions, ensuring user engagement and representation.

Team

João Guerreiro
Tiago Guerreiro
Filipa Brito

One headset, showing the lenses from within

Exploring Multiple Perspectives in Virtual Reality

Motivation

An advantage of VR is that it enables greater immersion, because users can embody their characters/avatars and perceive the environment from an egocentric (or first-person) perspective. While VR has been used for perspective-taking, this is done as a full experience where one is put in the shoes of others, usually as an empathy machine. We are interested in finding ways to experience different perspectives within the same experience (perhaps concurrently) using VR. The scenario we envision for blind people is motivated by the need to gain a greater understanding of the environment in the absence of visual feedback. One way to provide more information to a blind user is to augment their auditory feedback channel with the actions of other characters (by drawing on their perspective), helping them understand what is happening in the scene. Ways to support these multiple perspectives could be, for instance, enabling users to shift seamlessly between perspectives, or making the perspectives of others more salient in the user's experience (a very simple example is to convey the auditory feedback of footsteps to show that a person is moving in the scene, even at distances that would not be audible to the user, but that would be visible to a sighted user). Other potential contexts could be VR experiences aimed at caregivers, where they can be given multiple perspectives (including those of patients or doctors) within the same experience.

What you will do

In this thesis, you will be challenged to design, develop and evaluate novel solutions to convey multiple perspectives in VR in a specific context (e.g., healthcare or accessibility for blind people). You will conduct user studies early on to engage participants in co-design sessions, ensuring user engagement and representation. This work will conclude with a user study evaluating the solutions developed.

Team

João Guerreiro
André Rodrigues

An older woman interacting with a Temi robot

Multimodal Programming of Robots for Older Adults

Motivation

Older adults are often limited in achieving the tasks they wish to do due to motor, sensory, or cognitive decline. Technology, and robots in particular, is often created with stereotypical views of what older adults need: robots that try to promote physical or cognitive exercise are often deployed in care homes, as well as robots that connect older adults with their family, on the initiative of the latter.

What you will do

In this thesis, the student will design and develop multimodal scenarios where older adults control a Temi robot to achieve the tasks they wish. The student will explore different ways to enable control of the robot: voice, tangibles, or touch interfaces. Through these innovative interfaces, several scenarios will be developed: asking the robot to deliver something to a friend in a care home, controlling the robot remotely from the care home to enable communication with a grandson at home, among others. The work will be done in the context of a CMU Portugal project, in collaboration with CMU (USA), ISR (Tecnico, Portugal), and care homes in Lisbon and Setúbal.

Team

Tiago Guerreiro
Hugo Simão
Jodi Forlizzi
Alexandre Bernardino

Charts with classifications of on, off, and sleep stages

Monitoring Parkinson’s Disease: Detecting ON and OFF periods with Accelerometers

Motivation

People with Parkinson’s disease have several motor fluctuations during the day. These are normally associated with response to medication: when medication starts to wear off, symptoms become more visible. Detecting these changes is relevant to assess the state of the disease, and also to allow clinicians to adapt medication plans.

What you will do

In this work, the student will resort to data from accelerometer sensors used by people with PD in their daily lives to assess motor fluctuations. The student will have access to datasets of people with PD with accelerometer data and an annotated ground truth. The objective will be to create a model that is able to predict motor fluctuations. The work will be done in collaboration with Campus Neurológico Sénior.
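To make the kind of work concrete, below is a minimal sketch of such a pipeline, assuming windowed accelerometer recordings with per-window ON/OFF annotations; the window length, feature set, tremor band, and classifier are illustrative assumptions only, not the approach the thesis must follow.

```python
# Minimal sketch: classify ON/OFF motor states from wrist accelerometer windows.
# Assumes `signal` is a (n_samples, 3) array sampled at `fs` Hz and `labels`
# holds one ON/OFF annotation per window -- the data here are placeholders for
# the annotated PD datasets to be defined with the clinical team.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window, fs=50):
    """Simple time- and frequency-domain features from one accelerometer window."""
    magnitude = np.linalg.norm(window, axis=1)          # combine x, y, z axes
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    freqs = np.fft.rfftfreq(len(magnitude), d=1 / fs)
    band = (freqs >= 3) & (freqs <= 7)                  # tremor-related band (assumption)
    return [
        magnitude.mean(),
        magnitude.std(),
        spectrum[band].sum() / (spectrum.sum() + 1e-9), # relative energy in the band
    ]

def windows(signal, fs=50, seconds=60):
    """Split a continuous recording into fixed-length, non-overlapping windows."""
    step = fs * seconds
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

# X: one feature vector per annotated window; y: 1 for ON, 0 for OFF.
signal = np.random.randn(50 * 60 * 30, 3)               # placeholder: 30 minutes at 50 Hz
labels = np.random.randint(0, 2, size=30)                # placeholder annotations
X = np.array([extract_features(w) for w in windows(signal)])
y = np.array(labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```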

Team

Tiago Guerreiro
Diogo Branco
Joaquim Ferreira

One athlete using a VR headset.

Sports in Virtual Reality for Blind People

Motivation

Virtual reality is an emerging technology that is slowly becoming available to the masses at affordable prices. VR is currently used in a variety of contexts, including in sports, both for recreational and training purposes. However, most VR applications (including sports-related ones) are built with a major focus on visual information, making them inaccessible to blind users. In this thesis, we will investigate how to design accessible sports applications (e.g., table tennis, running, or others) for blind people, making use of VR technologies and auditory and haptic feedback.

What you will do

In this thesis, you will be challenged to design, develop and evaluate a novel VR sports application for blind people. You will conduct user studies early on to engage participants in co-design sessions, ensuring user engagement and representation. This work will conclude with a user study evaluating the developed application.

Team

João Guerreiro
André Rodrigues

Interacting with a transparent touchscreen with a complex circuits image

Audializations: Non-visual Exploration of Complex Data Visualizations

Motivation

Information visualization tries to ease the understanding of complex data through the use of characteristics such as the size, color, or location of elements. However, most graphical representations of data - from simple charts to data visualizations - are not accessible to blind people. The most common solution is to provide a description of the chart, which does not enable a blind person to explore charts interactively nor to make their own interpretation of the data. In this work, we aim to explore information visualizations alongside different characteristics of audio (e.g., pitch, volume, or timbre) in order to enable blind users to explore - and perceive - complex data visualizations (or, more broadly, representations) through audio.
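To make the idea of an audialization concrete, here is a minimal sketch (illustrative only, not part of the proposal) that maps each value in a data series to the pitch and loudness of a short tone and writes the result to a WAV file; the frequency range and mappings below are assumptions, and finding better encodings and interactions is precisely what the thesis will explore.

```python
# Minimal sonification sketch: map each value in a data series to the pitch and
# loudness of a short sine tone, so that a chart can be "listened to" left to right.
# The mapping ranges below (200-1000 Hz) are illustrative assumptions only.
import wave
import numpy as np

def sonify(values, sample_rate=44100, tone_seconds=0.25,
           low_hz=200.0, high_hz=1000.0, out_path="audialization.wav"):
    values = np.asarray(values, dtype=float)
    # Normalise data to [0, 1] and map to pitch (higher value -> higher pitch).
    norm = (values - values.min()) / (values.ptp() + 1e-9)
    t = np.linspace(0, tone_seconds, int(sample_rate * tone_seconds), endpoint=False)
    tones = []
    for v in norm:
        freq = low_hz + v * (high_hz - low_hz)     # pitch encodes the value
        amp = 0.3 + 0.6 * v                        # louder for larger values
        tones.append(amp * np.sin(2 * np.pi * freq * t))
    audio = np.concatenate(tones)
    pcm = (audio * 32767).astype(np.int16)         # 16-bit PCM
    with wave.open(out_path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Example: a simple "bar chart" of monthly values rendered as audio.
sonify([3, 5, 9, 4, 12, 7])
```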

What you will do

In this thesis, you will be challenged to design, develop and evaluate accessible visualizations that allow blind people to explore complex data. You will conduct user studies early on to engage stakeholders in co-design sessions ensuring user engagement and representation.

Team

João Guerreiro

Two people using a VR headset engaging in a virtual date.

Locomotion in Virtual Reality for Blind People

Motivation

Virtual reality is an emerging technology that is slowly becoming available to the masses at affordable prices. VR is currently used in a variety of contexts: gaming, education, shopping, social spaces, and employee training, to name a few. As with any emerging technology, it is fundamental that we ensure its accessibility to people with different abilities. One of the major challenges blind people face in virtual environments is navigating/moving in the virtual space. While prior work has focused on mimicking real-world techniques, such as a virtual white cane (due to user familiarity), in virtual reality there are many locomotion techniques that vary greatly from application to application (e.g., free teleportation, walking in place, analog stick, directional dashes, waypoint navigation). In addition, blind users in virtual environments will not have the same restrictions as in the real world, nor the restrictions sighted people have due to VR sickness (similar to motion sickness and triggered by visual stimuli), which blind users are unlikely to experience. We argue that this combination provides an opportunity to explore novel/fantastical mobility methods that are not possible otherwise.

What you will do

In this thesis you will be challenged to design, develop, and evaluate novel navigation techniques in VR for blind people. You will conduct user studies early on to engage participants in co-design sessions ensuring user engagement and representation. This work will conclude with a user study evaluating the developed set of navigation techniques.

Team

João Guerreiro
André Rodrigues

Picture of Botmap, a system with a tabletop display showing a US map with Ozobots (small robots) on top

Orientation and Mobility Games for Blind People

Motivation

Orientation and Mobility (O&M) can be defined as a set of concepts, skills and techniques that enable people with vision impairments to travel an environment safely and independently. Orientation refers to people’s ability to position themselves in the environment, reflected in their awareness of where they are and where they want to go, while mobility refers to people’s ability to move independently from one place to another in a safe, effective and efficient manner. These two interlinked concepts play a very important role in the lives of people with vision impairments in general and are extremely important to children, as the ability to travel independently provides access to a wide range of activities that enable people to participate in society.

What you will do

In this thesis, we aim to investigate current methodologies for O&M training and to design, develop and evaluate novel technological solutions to improve its effectiveness and engagement. In particular, we aim to explore how games can improve engagement in learning O&M concepts, techniques, and skills. We see opportunities for technology to further support O&M training activities both during and after classes with O&M specialists/teachers. You will conduct user studies early on to engage O&M specialists and blind children in co-design sessions ensuring user engagement and representation. This work will conclude with a user study evaluating the technological solution developed.

Team

João Guerreiro

Two people with different visual abilities playing together and having fun

Shared Gameplay Loops

Motivation

Entertainment is increasingly recognized as a fundamental part of our lives and well-being. Gaming has a long list of potential benefits, including helping to cope with anxiety, fostering social bonding, and serving as a creative outlet. While there is a vast array of options available for playing together, players are very limited in the experiences they are able to share when there are significant differences in skill, ability, or gaming tastes, among other factors.

What you will do

In our group, we have been exploring how asymmetric roles can be leveraged to create asymmetric games that provide opportunities for shared play. In this topic, you will have the opportunity to delve deeper into the design of asymmetric games and explore how to create and expand shared gameplay loops that cater to different family members.

Team

André Rodrigues
João Guerreiro
Pedro Pais

A person holding a smartphone and pointing its camera to a building.

Data-Driven Navigation Assistance

Motivation

People with vision impairments are able to navigate independently by using their Orientation and Mobility skills and their travel aids - white cane or guide dog. However, navigating independently in unfamiliar and/or complex locations remains a major challenge, and therefore blind people are often assisted by sighted people in such scenarios. While navigation technologies such as those based on GPS (e.g., Google Maps) can help, their accuracy is still too low (e.g., around 5 meters) to fully support blind users when navigating in unknown locations. Moreover, GPS does not work indoors, and indoor locations generally do not have a navigation system installed.

What you will do

Our goal is to use data from the crowd (other people who walked the same areas before) to learn more about the environment and be able to provide additional instructions to blind users. A possible approach is to use smartphone sensors to estimate possible paths after an individual enters a building. While the GPS may inform us that users are entering/inside a building, large amounts of data (crowd-based) from smartphone sensors may give us information about possible paths and obstacles to instruct blind users. For instance, one may learn that after the entrance there is a path going left and after approximately 10 meters there are stairs to go up one floor; and another path that goes forward (and so on). While the data from a single user may be erroneous, we plan to use data from many users to make such an approach more robust.
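As a very rough illustration of this crowd-based idea, the sketch below assumes each user's trace has already been reduced to a sequence of (heading, step length) dead-reckoning estimates starting at the entrance, snaps the traces onto a coarse grid, and keeps only cells traversed by several users; the data format, grid size, and threshold are hypothetical simplifications of real smartphone sensor data.

```python
# Minimal sketch: aggregate noisy pedestrian dead-reckoning traces from many users
# into a coarse occupancy grid of "likely walkable paths" inside a building.
# Each trace is assumed to be a list of (heading_radians, step_length_m) estimates
# starting at the building entrance -- a simplification of real smartphone data.
import math
import random
from collections import Counter

def trace_to_cells(trace, cell_size=1.0):
    """Convert one dead-reckoning trace into the set of grid cells it crosses."""
    x = y = 0.0                                   # entrance is the origin
    cells = {(0, 0)}
    for heading, step in trace:
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        cells.add((round(x / cell_size), round(y / cell_size)))
    return cells

def aggregate(traces, min_users=3):
    """Keep only cells traversed by at least `min_users` different traces."""
    counts = Counter()
    for trace in traces:
        counts.update(trace_to_cells(trace))      # each user votes once per cell
    return {cell for cell, n in counts.items() if n >= min_users}

# Example: five slightly noisy users walking ~10 m straight, then turning left.
traces = [
    [(random.gauss(0.0, 0.1), 0.7) for _ in range(14)] +        # roughly east
    [(random.gauss(math.pi / 2, 0.1), 0.7) for _ in range(10)]  # then roughly north
    for _ in range(5)
]
likely_path = aggregate(traces, min_users=3)
print(sorted(likely_path))
```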

Team

João Guerreiro
Joana Campos

Two cartoon people, one playing on a PC while the other is on a treadmill that appears to connect to the PC.

Games that Spill and are Fed by Real-World Interactions to Foster Family Bonding

Motivation

Digital gaming has the potential to prompt feelings of togetherness through a shared activity, based on challenging goals and immersive interaction. However, finding the time, or common interests, among family members is often hard or even impossible, limiting the opportunities for shared play.

What you will do

In this thesis, you will be challenged to create new opportunities for family members to affect each other’s routines by designing games that spill into and are augmented by real-world interactions. The goal is to explore how games can harvest everyday tasks and leisure activities from one family member as a contribution to the game of another, and vice versa. For example, can a parent’s work productivity influence a child’s game, and, in turn, the points obtained in the game affect the parent’s leisure time (e.g., watching television)? What are the implications of these dynamics? This work will explore proof-of-concept scenarios and evaluate them with users. This work will be done in collaboration with researchers from KU Leuven, Belgium.

Team

André Rodrigues
Tiago Guerreiro
João Guerreiro
David Gonçalves

An image of the Tactonom Reader device, showing a working area with a camera on top

Learning through Audio-Tactile Exploration using Tactonom Reader

Motivation

Learning for people with visual impairments relies mostly on auditory and haptic feedback, but these modalities are often used separately. For instance, technologies to support learning braille may rely on auditory feedback (for instance, on a touchscreen) or on haptics (e.g., on a sheet of paper). The Tactonom Reader (https://www.tactonom.com/en/tactonomreader-2/) shows promise in combining the two feedback modalities to support better learning. For instance, one may explore a sheet of paper with their hand while learning braille, and receive auditory feedback about their exploration. Despite this promise, little is known about the Tactonom Reader's potential to support learning activities, which we will explore in this thesis.

What you will do

In this project you will explore the potential of the Tactonom Reader to assist people with visual impairments in learning different competencies. You will be challenged to design, develop and evaluate novel applications that can leverage the Tactonom Reader. You will conduct user studies early on to engage stakeholders in co-design sessions ensuring user engagement and representation.

Team

João Guerreiro
Tiago Guerreiro
André Rodrigues

Amazon Alexa device

Community-Created Voice Tech-Recipes

Motivation

Mobile devices are the “Swiss Army Knives” of today. They support a wide range of tasks, enabling people to access a wealth of information and services through their extensive connectivity; they have the potential to empower people in everyday tasks. We often rely on our phones to follow all sorts of “recipes”: cooking instructions, tutorial steps for a particular piece of software, hardware configuration, furniture assembly, and so on. While some of these are professionally created instructions, often we are dealing with authored content on blogs, cooking websites, online communities, and even instructions written by colleagues, friends and family. For blind people, it is common for recipes (e.g., tech instructions) and other instructions to be passed along within the community and on a user-to-user basis. However, creating, disseminating and consuming this information strictly on a smartphone has its challenges. For example, following these recipes on a phone while focusing on the primary task causes additional challenges.

What you will do

In this thesis, you will be challenged to build an online platform for creating “recipes” to be shared and consumed through smartphones and voice assistants such as Google Mini or Amazon Alexa.

Team

André Rodrigues
João Guerreiro

Email with phishing warning

Phishing Prevention and Response for Blind People

This proposal is already assigned to a specific student.

Motivation

Phishing (pronounced: fishing) is an attack that attempts to steal your money or your identity by getting you to reveal personal information – such as credit card numbers, bank information, or passwords – on websites that pretend to be legitimate. Browsers, e-mail clients and apps already try to detect phishing attacks and provide cues to users. However, blind people miss the global perspective and visual cues provided in these warnings. They jump to the content and eventually fail to be warned.

What you will do

In this work, the student will assess how blind people perceive current warnings and cues, and will design new ways of preventing phishing for blind people. The plan includes developing a browser extension that is able to communicate warnings and alerts to blind people in a variety of relevant web applications.

Team

Tiago Guerreiro
João Janeiro


Stories


Diogo Marques studied the phenomenon of social insider attacks on personal mobile devices in his PhD, resorting to anonymity-preserving, large-scale online studies. His work won awards at the most reputable usable security conference and attracted intense media attention, with a highlight being a Daily Show sketch (from minute 3:05). During his PhD, he spent 4 months at IBM Research NY doing an internship. He is now a user researcher at Google Munich.



Sérgio Alves completed his master's thesis in 2017, developing Scrapbook, a web platform to enable agile biographic reminiscence therapy, which was used for more than a year in national clinical institutions and care homes (he presented it at the WebForAll conference in San Francisco). Before starting his PhD, he was hired as an engineer in the group to maintain and improve Scrapbook, and was then hired by the Openlab (UK) to maintain the collaboration with Tech&People. He is now researching citizen-led experimentation of user interfaces (advised by Tiago Guerreiro and co-advised by Kyle Montague, Openlab, UK).


scrapbook poster

André Rodrigues completed his master's thesis in 2014, working on system-wide assistive technologies. In this work, besides a novel two-handed tablet text-entry method for blind people, he was able to provide a mobile phone access solution to Miguel, a tetraplegic blind young man. He presented his MSc work at BCS HCI in Southport, UK, and at CHI (the top international conference in Human-Computer Interaction) in South Korea. After that, he pursued his PhD (winning LASIGE's and DI's Best PhD Student Award twice in a row), focusing on human-powered solutions for smartphone accessibility. From his many contributions, we highlight TinyBlackBox, an Android mobile logger for user studies that enabled a longitudinal study with blind people and has been integrated into the popular AWARE framework. During his PhD, he spent 3 months at Newcastle University, where we collaborated in the development of an IVR health system that is now being explored in a variety of contexts. He is now a postdoc scholar in the group, focusing on accessibility, gaming, and virtual reality.


tiny black box poster

Diogo Branco completed his master's thesis in 2018, focusing on extracting metrics from wrist-worn accelerometers and designing usable free-living reports for neurologists and patients with Parkinson’s Disease. The platform has been in use since then, for over 18 months (and counting). He presented the outcomes of his MSc at CHI 2019 in Glasgow, and started his PhD in 2019. His work enabled ongoing service contracts with pharmaceutical companies and the usage of wearable sensors in clinical trials.


Designing Free-Living Reports for Parkinson's Disease poster

Hugo Simão is an industrial designer doing his master's thesis on robots to support people with dementia. In this process, he designed and developed MATY, a multisensory robot that is able to project images, emit fragrances and sounds, and move around autonomously. He published his preliminary work at CHI 2019, in Glasgow, as well as other parallel projects on robots for older adults (e.g., a carrier-pigeon robot at HRI 2020). Hugo is starting his PhD in the group next semester, and was accepted for a 6-month internship at Carnegie Mellon University (postponed due to the coronavirus pandemic).


Playing With Others poster

David Gonçalves is a current master's student (he started in September 2019) in the group, working on asymmetric roles in mixed-ability gaming. Until now, he has performed formative studies with blind gamers (and their gaming partners), trying to understand their current gaming practices, and has developed his first mixed-ability game using Unity. He submitted his first full paper to an international conference (under review), and joined Tiago on a visit to Facebook Research London, where they were invited to present their work at Facebook's first Research Seminar Series.


Playing With Others poster


Ana Pires is a postdoc scholar at Tech&People. She is a psychologist working on Human-Computer Interaction, with a particular focus on inclusive education. In her work, she explores how tangible interfaces improve the way children learn, for example, mathematical or computational thinking concepts. In the group, she has been working with Tiago and master's students to develop solutions for accessible programming, among many other projects.


Tangible math game for visually impaired children poster

Frequently Asked Questions


I am considering doing a thesis in the group but I am not sure I have the technical skills required. Am I required to know how to program wearables, develop games, use computer vision, or apply machine learning algorithms?

We are a human-computer interaction research group with a strong technical component. Besides working with end-users, we build robust user-facing interactive systems to be evaluated in realistic settings. However, the group already has a set of skills that enables newcomers to have a swift onboarding experience. When needed, we provide internal workshops, pair master students with more experienced researchers, and provide constant support. The group also includes designers, engineers, and a psychologist, to provide support to projects that go beyond the expected knowledge of a student.

What are the future prospects for a student doing a master’s with your group?

User research and user interface development are among the most sought-after skills in today's market. If you search for job offers, you will consistently find user research positions among the best paid. Our graduates, at the end of their master's, have only faced the challenge of selecting which offer they preferred. While some were eager to join the Portuguese market (e.g., Feedzai, Farfetch), we also have former master's students staying with us as engineers with competitive contracts, several joining us for a PhD, and some going abroad (e.g., Google, Newcastle University).

I want a scholarship. Do you offer those?

The group is currently engaged in four European projects, two national projects, and several service contracts. Most of our students are supported by scholarships in those contexts. All students have the opportunity to apply for LASIGE scholarships; at the end of those scholarships, depending on how the work is going, some students are offered a continuation of their financial support. Some projects are supported from the start, if they are developed in the context of a funded project task. All of the students in the stories above were fully supported during their studies. In our current team, more than 50% of the students are supported by scholarships.

What’s the deal with conference travels and internships? Do I have to pay for those?

No. Conference publications are fully supported by the group (travel, hotel, registration, per diem) when submission to the conference is agreed with your advisor. Internships are normally arranged with your advisors and the visited institution, and are also fully covered by funding schemes (e.g., Erasmus+) or by the visited institution/company.

I want to work in the X project but I will need Y technology and a PC with special hardware. Do you have those?

If a proposal requires specific tech/materials, they will be provided, including computers/laptops to work with them (if needed). The group already has access to a wide set of resources such as eye trackers, mobile devices, physiological computing kits, IoT kits, commodity robots, Bluetooth beacons, and VR headsets, as well as cloud computing, panel, and transcription services.

How can I know more about the group?

You can check our publications, team and research pages to get an idea of our published work and mission. We also invite you to check our lab memo to get a feel for how the group operates. Follow us on Twitter. And, for any other business or more detail, come talk to us.

What do you expect from your students?

The group welcomes motivated, strong-willed students to pursue impactful work. While getting a position is expected to be competitive with regard to a student's background, we are mostly looking for people who are motivated to work and to make a difference. If you just want to finish your master's in the easiest way possible, this is not the group you should be applying to. Conversely, if you are eager to learn a variety of skills, be challenged, be exposed to real-life problems, and seek fair and effective solutions to them, come work with us.