Event-based visual effects for more expressive, joyful remote conversations

Face-Tracking Video Chat Feature Exploration

2025 academic project

My role

Pain Points

Product Strategy 1

Product Strategy 2

Brief Context

Team

Tool

Academic

Advisor

UX Design & Technical Development

Even with partners or close friends, not everyone feels comfortable showing their bare face on video, despite wanting a stronger sense of connection.

This feature study explores how face-tracking, emoji-based characters can support expressive and playful communication while preserving a sense of emotional safety. By mapping real-time facial expressions to animated characters and event-based visual effects, the interaction allows users to share emotions without fully exposing themselves.

The goal was to create a space where people feel comfortable, while still connecting through active, emotionally rich interactions, focusing on the essence of video chat rather than realistic self-presentation.

Concept Modeling (100%)

User Interview (100%)

Technical development (100%)

Character / Emoji Design (100%)

Interface Design (100%)

Interaction Design (100%)

UX Strategy (100%)

Solo Project

Figma, JavaScript, p5.js

Daniel Lefcourt @ RISD

In this self-initiated two-week project, I owned the full process, from defining the concept and UX strategy to building interactive Figma prototypes and implementing the core interaction logic through hands-on coding.

Duration

Dec 2025 (2 weeks)

How might we create joyful moments of connection through emotion-based interactions, while maintaining a sense of safety?

Emoji tracking my real-time facial movements

Event #1 - Surprise

Event #2 - Wink

Event #3 - Frown

Event #4 - Laugh

“I want to show my face and talk to my boyfriend on video calls, but I’m not quite comfortable yet. I end up showing only half of my face, and it sometimes makes the call feel less immersive.”

“I think video chat could be so much fun, but among guys, we don’t usually sit there looking directly at each other’s faces. Suggesting that kind of interaction just feels awkward.”

Interviewee 1

Interviewee 2

Amplifying emotional signals by turning facial expressions into event-based effects

Capturing and recording playful moments in a shared in-chat gallery, making them intentionally shareable

Facial expressions activate event-based visual effects, amplifying emotional moments during conversation. This transforms small, fleeting expressions into shared visual experiences, making remote interactions feel more expressive and emotionally connected.

Playful moments naturally emerge during emoji-based conversations.
Instead of being captured privately or after the fact, this design allows moments to be recorded directly within the chat, making archiving an intentional, visible part of the interaction and turning those moments into shared memories that can also be revisited later.

Designed an emoji-based interaction system

that tracks real facial movements

This system focuses on facial expressions themselves, rather than appearance,

not how we look, not whether we’re wearing makeup, and not how camera-ready we feel.

By translating real-time facial movements into an emoji-based character, it reduces the emotional pressure of being on camera, while still allowing users to express genuine emotions in a playful way.
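The mapping from face to character can be kept deliberately simple. A minimal sketch of how real-time metrics could drive the emoji character, assuming a tracker (e.g., ml5.js FaceMesh) already provides normalized 0–1 openness/raise values; the metric and parameter names here are illustrative, not the project's actual variables:

```javascript
// Sketch (assumed metric names): map normalized facial metrics (0..1)
// from a face tracker to drawing parameters for an emoji-style character.
function mapToCharacter(metrics) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    // mouth height grows with how far the mouth is open (max 40 px)
    mouthHeight: clamp(metrics.mouthOpen, 0, 1) * 40,
    // eyes scale between squint (0.3x) and wide open (1.2x)
    eyeScale: 0.3 + clamp(metrics.eyeOpen, 0, 1) * 0.9,
    // brows shift upward by up to 12 px when raised
    browOffsetY: -clamp(metrics.browRaise, 0, 1) * 12,
  };
}
```

Because the character only consumes a handful of abstract parameters, the same drawing code works regardless of how the user actually looks on camera.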

Host opened chat

Screenshot

Recording

Default

Recording

Open bottom sheet to invite friend

Shared Gallery

Capture

Record

Chat history

Frequent inaccurate classifications

Definitely not a happy face..!

Clear effect application through event-based facial detection

[UI mockups: empty chat screen (“You’re here alone”) with an Invite button, and a Friends List bottom sheet with name search, online/away status indicators, and per-friend Invite buttons]

Chat duration (00:03)

Guest name appears

Autoplay thumbnail

Captured screenshot with delete option

Button extension interaction when the ‘more’ icon is clicked

Guest and host in chat

Archived screenshot images / recorded GIFs in the shared gallery

Unfold all of the hidden screenshots / recordings

#1 Iteration

#2 Iteration

Takeaways

Pivoting from Emotion-based face detection

to Event-based face detection

Redefining chat history from a data archive into a space for emotional recall

Emotion-based face detection relied on discrete emotion labels, but in practice these classifications were often inaccurate and unstable due to the continuous nature of facial expression changes. Recognizing that interaction breaks down at ambiguous emotional boundaries, I pivoted to an event-based face detection approach that responds to expression changes rather than fixed emotion types.

The initial approach relied on users to actively interpret dense visual data, which proved cognitively demanding and emotionally distant. By introducing visual emphasis and interactive cues, the redesign enables moments to resurface more naturally and intuitively.

My initial approach relied on an emotion-detection API to classify real-time facial expressions into seven discrete emotion categories. However, through implementation and testing, I found that these classifications were often inaccurate and overly rigid—facial keypoints alone struggled to reliably represent nuanced emotional states.

More importantly, emotions shift continuously rather than discretely. As expressions transition from one state to another, the boundaries between emotion categories become unstable and ambiguous.

By prioritizing visual density, the initial design assumed users would actively derive meaning from stacked data. However, this expectation introduced unnecessary cognitive effort and weakened the experience of recalling meaningful moments.

This iteration uses visual emphasis and autoplay as interaction cues, shifting chat history from passive browsing to active, emotionally driven engagement.

From a user’s perspective, the event-based facial detection model felt more intuitive and predictable. Because each visual effect was triggered by a clear, observable facial action such as opening the mouth beyond a certain threshold, users could easily understand why a specific effect appeared. This clarity reduced confusion during interaction and allowed users to focus on expressing themselves, rather than trying to interpret how the system was reading their emotions.
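One way to keep these triggers predictable is to fire an effect only on the upward crossing of a threshold, so holding a pose does not re-trigger it on every frame. A minimal sketch: the 0.15 mouth-opening value matches the constant from the project, while the function names are my own illustration:

```javascript
// Sketch of event-based triggering: an effect fires on the *transition*
// past a threshold (rising edge), not continuously while the pose is held.
const MOUTH_THRESHOLD = 0.15; // 15% mouth opening, as in the project

function makeEdgeTrigger(threshold) {
  let wasAbove = false;
  return (value) => {
    const isAbove = value > threshold;
    const fired = isAbove && !wasAbove; // only on the upward crossing
    wasAbove = isAbove;
    return fired;
  };
}

// One trigger per event keeps each effect's state independent.
const surpriseTrigger = makeEdgeTrigger(MOUTH_THRESHOLD);
```

Fed a per-frame mouth-openness value, this fires once when the mouth opens past the threshold and stays silent until it closes and opens again.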

#1 Event - Surprise

const BROW_THRESHOLD = 0.35;   // 35% eyebrow raise
const MOUTH_THRESHOLD = 0.15;  // 15% mouth opening

#2 Event - Frown

const FROWN_THRESHOLD = -0.20; // downward mouth curve (negative = frown)

#3 Event - Wink

const CLOSED_THRESHOLD = 0.30; // one eye below this = closed
const OPEN_THRESHOLD = 0.42;   // other eye above this = open

#4 Event - Laugh

const EYE_SQUINT_MAX = 0.65;   // both eyes squinty
const SMILE_CURVE_MIN = 0.40;  // strong upward mouth curve
const MOUTH_OPEN_MIN = 0.20;   // mouth slightly open
const BROW_MAX = 0.18;         // brows not too high (filters surprise)
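These thresholds can be combined into per-event checks. A minimal sketch, assuming normalized metrics; the metric names (eyeLeft, eyeRight, mouthCurve, mouthOpen, browRaise) are illustrative rather than the project's actual variables:

```javascript
// Sketch: combining the event thresholds into per-event checks.
// Metric names are assumptions; values are assumed normalized.
const CLOSED_THRESHOLD = 0.30;
const OPEN_THRESHOLD = 0.42;
const FROWN_THRESHOLD = -0.20;
const EYE_SQUINT_MAX = 0.65;
const SMILE_CURVE_MIN = 0.40;
const MOUTH_OPEN_MIN = 0.20;
const BROW_MAX = 0.18;

// Wink: one eye clearly closed while the other stays clearly open.
// The gap between the two thresholds (0.30 vs 0.42) leaves a dead zone
// that prevents flicker when an eye hovers near the boundary.
const isWink = (m) =>
  (m.eyeLeft < CLOSED_THRESHOLD && m.eyeRight > OPEN_THRESHOLD) ||
  (m.eyeRight < CLOSED_THRESHOLD && m.eyeLeft > OPEN_THRESHOLD);

// Frown: mouth curves downward past the (negative) threshold.
const isFrown = (m) => m.mouthCurve < FROWN_THRESHOLD;

// Laugh: squinted eyes + strong smile + slightly open mouth,
// with brows kept low so a surprise face is not misread as a laugh.
const isLaugh = (m) =>
  m.eyeLeft < EYE_SQUINT_MAX && m.eyeRight < EYE_SQUINT_MAX &&
  m.mouthCurve > SMILE_CURVE_MIN &&
  m.mouthOpen > MOUTH_OPEN_MIN &&
  m.browRaise < BROW_MAX;
```

Each check reads as a description of an observable facial action, which is what makes the effects feel explainable to users.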

Design Iteration

Evaluating Interaction Choices

Through Emotional Experience

To validate key design decisions, I explored two major iterations across four factors related to emotional expression and recall. Using Figma prototypes, I compared interaction models through qualitative feedback sessions, focusing on clarity, responsiveness, and emotional engagement.

Emotion-based face detection

#1 Iteration - Facial Expression Detection Model

#2 Iteration - Chat History Design for Emotional Recall

Data-dense chat history

Event-based face detection

Emotion-forward chat history


[UI mockups — #1 Iteration: data-dense chat history list (Chat with Mindy, Daniel, Clara, Shelly, and Jinu, each with duration and date); #2 Iteration: emotion-forward chat history with autoplay thumbnails, clip durations, and hidden-capture counts (+3)]

Initial Approach:

Emotion-based face detection

Pivoted Approach:

Event-based face detection

Initial Approach:

Chat history as a log

Pivoted Approach:

Chat history as emotional recall

When I first started thinking about playful chatting interactions, I was focused on finding something entirely new that would feel fun. But when I looked more closely at what people actually do in moments of fun, and at the habits we already have, I realized the answer was already there.

In this project, that behavior was the instinct to capture and save playful moments. People naturally take screenshots or try to record funny interactions. Through this process, I learned that meaningful UX doesn’t always come from inventing brand-new features, but from recognizing unconscious needs within existing behaviors and turning them into intentional design solutions.

2025 by Ellie Na