

This is my M1.1 design project, in which my teammates and I designed an AI-supported system that facilitates communication between a presenter and their audience in order to provide feedback. The usage scenario is a lecture setting with 50-100 students and a single speaker, accounting for presenters with different experience levels.






Product System Design

Expertise Areas

T&R, MD&C, U&S

My Contributions

User study, development of low-fi presenter prototype, AI realization

Iteration 1

We chose to design for the context of presentations and used bodystorming and preliminary interviews to find design opportunities for improving presentations. We concluded that the best way to improve audience attention is to improve the presenter's presentation skills. Based on the gathered insights, we established a design that lets the presenter receive feedback at key moments of the presenting experience, both during and after a presentation. It includes a device for listeners to input feedback, a device that expresses that feedback, and a reflective interface.

Iteration 2

The concept was iterated upon, and a user test was performed and analyzed. From the results, we derived needs and requirements for both the audience members and the presenter, which served as argumentation for later design decisions. However, the needs of the audience and the presenter often conflicted; these conflicts became design challenges to solve in the next iteration. One example is the feeling of control: the audience wanted more control while giving feedback to presenters, while the presenters wanted to remain in control of their own presentation.

Iteration 3

For the final iteration, we focused on facilitating discussion as an additional form of interaction between audience and presenter. We also made the design modular to target different presenter experience levels. We implemented the audience device, the presenter device, and the AI in our final prototypes, and connected them via OOCSI and Data Foundry. The concept flow (AI scenario) shows that AI is incorporated throughout the entire process, and what we have achieved so far touches on the values we set out at the start.


The AI realization is written in Python and relies on several external libraries.

Using the sounddevice library (Play and Record Sound with Python — python-sounddevice, 2019), the program records the speech in 60-second segments inside a while loop controlled by a boolean presentation-status flag. When the presentation ends, the program goes through all the sound files and generates datasets with the required AI features using my-voice-analysis. The datasets are then sent to the Data Foundry and matched with the audience inputs through timestamps.
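The matching step can be sketched as follows. This is a minimal, hypothetical illustration of the timestamp logic only (the record shapes, key names, and function names are assumptions, not the project's actual code): each audience input is mapped to the 60-second speech segment that was being recorded when it arrived.

```python
from datetime import datetime, timedelta

SEGMENT_SECONDS = 60  # matches the 60-second recording windows


def segment_index(event_time: datetime, start_time: datetime) -> int:
    """Map an audience feedback timestamp to the index of the
    60-second speech segment it falls into."""
    offset = (event_time - start_time).total_seconds()
    return int(offset // SEGMENT_SECONDS)


def match_feedback(segments: list, feedback: list, start_time: datetime) -> list:
    """Attach each audience input to the speech-feature record of the
    segment recorded at the same time (hypothetical record shapes)."""
    for event in feedback:
        idx = segment_index(event["time"], start_time)
        if 0 <= idx < len(segments):
            # Collect audience values alongside the segment's speech features.
            segments[idx].setdefault("audience", []).append(event["value"])
    return segments


# Example: feedback at t = 75 s lands in the second segment (index 1).
start = datetime(2022, 1, 1, 10, 0, 0)
segments = [{"speech_rate": 3.1}, {"speech_rate": 2.8}]
feedback = [{"time": start + timedelta(seconds=75), "value": "engaged"}]
matched = match_feedback(segments, feedback, start)
```

Keying both streams to a shared start time means the audience device and the recorder only need loosely synchronized clocks, since each match is coarse-grained to a whole 60-second window.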


The program downloads the file from the Data Foundry and analyses the datasets. The speech features serve as the learning features, while the engagement level is the target. With a learning model trained on these, the program ranks the importance of the features and transmits the results to the discussion and reflection parts via OOCSI.
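The text does not specify which learning model was used, so as a minimal stand-in for model-based feature importance, the sketch below ranks each speech feature by the absolute Pearson correlation between that feature and the engagement target. The feature names and record shape are assumptions for illustration only.

```python
from math import sqrt


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def rank_features(samples, target_key="engagement"):
    """Rank speech features by |correlation| with the engagement target,
    highest first. A simple stand-in for learned feature importance."""
    feature_keys = [k for k in samples[0] if k != target_key]
    target = [s[target_key] for s in samples]
    scores = {k: abs(pearson([s[k] for s in samples], target))
              for k in feature_keys}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Example: speech_rate tracks engagement perfectly, so it ranks first.
samples = [
    {"speech_rate": 1.0, "pause_ratio": 0.5, "engagement": 1.0},
    {"speech_rate": 2.0, "pause_ratio": 0.2, "engagement": 2.0},
    {"speech_rate": 3.0, "pause_ratio": 0.9, "engagement": 3.0},
]
ranked = rank_features(samples)  # [("speech_rate", 1.0), ("pause_ratio", ...)]
```

The resulting ordered list of (feature, score) pairs is the kind of payload that could then be published over OOCSI for the discussion and reflection parts.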


What I find valuable is the embodiment of data in users' feedback. While listening to and transcribing the users' feedback during the interviews, it occurred to me that users frequently, and often unconsciously, refer to certain types of data, whether explicitly or implicitly. I previously saw data as the cold extreme of technology, but now I can imagine how it connects users and their needs with rigid, advanced technologies.

Besides, the whole project has little to do with emerging interaction technologies, which are part of my interest within my professional identity. Before this semester I was interested in works that incorporate AI to create playful interactions, and I chose this squad as an exploration of that interest. After going through this project, I have found that it is the playful elements enabled by advanced technologies that actually interest me, rather than AI itself. Therefore, I will dive deeper into that area in the future, but what I learned from an AI perspective (conversations, data processing, etc.) can definitely provide me with new ways of thinking about entertainment.
