HOUSE is a multidisciplinary design studio. My name is Jenny Rodenhouse; I am an interaction designer and educator. I make playable graphics using 3D game and physics engines to produce interactive software, videos, research, models, graphics, exhibitions, installations, courses, labs, and curriculum. I am interested in the semiotics of action and use design to diversify concepts, expressions, and representations within computation. My projects are independent, collaborative, and client-based.

Associate Chair of the Bachelor of Science (BSc) in Undergraduate Interaction Design at ArtCenter College of Design. Faculty Director of the Immersion Lab for seven years, increasing student access to technology. Has taught for nine years as Associate Professor in Undergraduate Interaction Design and Graduate Media Design Practices at ArtCenter. Faculty at the Southern California Institute of Architecture (SCI-Arc).

Graduate of Syracuse University’s five-year Industrial and Interaction Design program (BID); received her MFA in Media Design from ArtCenter College of Design.
Previously an interaction designer at Microsoft Research, Xbox Entertainment and Devices, and Windows Phone Advanced Development. Worked on the first Windows Phone platform, explored the future of transmedia entertainment, prototyped emerging gestural interactions, designed and shipped the 2011 Xbox interface, created NFL fantasy football on Xbox, and explored cross-platform social experiences for Microsoft Research and Xbox Live.


Publications: (1) Manual Dexterity: An Exploration of Simultaneous Pen + Touch Direct Input, CHI 2010: I Need Your Input. (2) Pen + Touch = New Tools, ACM Symposium on User Interface Software and Technology (UIST); CHI alt.chi paper. (3) Mixsourcing: Exploring Bounded Creativity as a Form of Crowdsourcing, ACM Conference on Human Factors in Computing Systems (CHI).
Jury Chair for the Core77 Design Awards, a Fellow at the Nature, Art & Habitat Residency in Sottochiesa, Italy, and a Postgraduate Research Fellow in Media Design Practices at ArtCenter College of Design in Pasadena, California.

Work shown at art, architecture, and design events including Netflix’s series The Future Of (episode: Gaming); LA’s Architecture and Design Museum; the Venice Biennale of Architecture; Dutch Design Week; Die Digitale; DDDD; Spring Break Art Show; FEMMEBIT; Navel; Roger’s Office Gallery; IxDA 2019; the Swiss Architecture Museum; Architektur Galerie Berlin; BODY and the Anthropocene; the Bi-City Biennale of Urbanism / Architecture; the Architecture + Design Museum; Open City Art City Festival at Yerba Buena Center for the Arts; the Post-Internet Cities Conference; The Graduate Center for Critical Studies; KAM Workshops: Artificial Natures; and CHI. Her projects have been featured in Wallpaper, The Guardian, Wired, Anti-Utopias, and Test Plots Magazine.

Contact: house@jennyrodenhouse.com

Associate Chair IxD BSc  

ArtCenter College of Design, Pasadena, CA, 2023

Consulting w/ Microsoft  

Exploring the future of AI and mixed reality

Dummy
2024, Voice User Interface, Unity, Video, Interview with Crazy Minnow Studio, 6:05

How do AI-driven lip synchronization systems translate and visually perform human speech, and what does this reveal about the ways software “speaks” or represents us in digital environments such as video games?

Ventriloquy is the art of making one’s voice appear to come from somewhere else. Dummy is a voice user interface that explores the mouth as a visual language—one we read, interpret, and have programmed to behave in specific ways. Using lip-synchronization software, the interface employs audio amplitude and phoneme detection to approximate mouth shapes known as visemes—the visual counterparts of phonemes, or units of sound in speech. By intentionally separating sound from shape, the simulation introduces misalignments that generate new facial expressions, meanings, and modes of communication.
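As a rough illustration of that logic (not the SALSA API or the project's actual code), a minimal Unity sketch might map detected phonemes to viseme blend shapes and then deliberately scramble that mapping; the phoneme labels, blend-shape indices, and callback below are hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: drive viseme blend shapes from detected phonemes,
// with an optional "misaligned" mode that pairs sounds with the wrong shapes.
public class VisemeDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;   // mesh with viseme blend shapes
    public bool misalign = true;       // intentionally break the sound/shape pairing

    // Hypothetical phoneme -> blend shape index table (depends on the character rig).
    Dictionary<string, int> visemeIndex = new Dictionary<string, int>
    {
        { "AA", 0 }, { "EE", 1 }, { "OH", 2 }, { "FV", 3 }, { "MBP", 4 }
    };

    // Called by whatever audio analysis detects a phoneme and its amplitude (0..1).
    public void OnPhoneme(string phoneme, float amplitude)
    {
        if (!visemeIndex.TryGetValue(phoneme, out int index)) return;

        if (misalign)
        {
            // Shift every phoneme onto a neighboring viseme, separating sound from shape.
            index = (index + 1) % visemeIndex.Count;
        }

        // Reset all tracked visemes, then open the (mis)matched one in proportion to loudness.
        foreach (int i in visemeIndex.Values)
            face.SetBlendShapeWeight(i, 0f);
        face.SetBlendShapeWeight(index, amplitude * 100f);
    }
}
```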

The video uses audio from an interview with Crazy Minnow Studio, creators of SALSA Lip-Sync—an animation tool used to puppeteer character mouths in video games and 3D simulations. Through this process, the project explores how software ventriloquizes human voice, translating it into computational gestures that both mimic and distort human expression.

The work situates itself within contemporary discourse on AI-mediated communication and the automation of human expression. As lip-sync and generative AI systems increasingly perform speech in gaming, virtual production, and social media, questions of authorship, representation, and agency arise: Who is really speaking when a digital mouth moves? What assumptions about language, emotion, and identity are embedded in these algorithmic performances?

The project reveals that communication technologies, particularly AI-driven interfaces, function as interpretive systems, not transparent channels. By separating phonemes from visemes, Dummy exposes speech as a site of translation, where software interprets and performs voice through computational models of expression. These distortions make visible the cultural and aesthetic assumptions embedded in machine-mediated speech: notions of intelligibility, emotion, and even gendered expressiveness. Ultimately, when software “speaks” on our behalf, it does more than represent us; it redefines what counts as expression, agency, and presence in digital environments.

Thank you to Crazy Minnow Studio  


 FINAL VIDEO
Video excerpt
Video excerpt
Video excerpt
Unity prototype
Unity simulation - waiting mode when no audio is present
Still from Unity simulation

Dent
2024, 3D Interface, Typeface Generator 


A playable type generator created in Unity. Throw balls at the letterform to create creases and folds, deforming its meaning.
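The deformation logic can be sketched as a small Unity script; this is a simplified stand-in for the generator, with the radius and depth values as placeholders, attached to a letterform with a MeshFilter and MeshCollider so that thrown rigidbody balls push nearby vertices inward.

```csharp
using UnityEngine;

// Sketch of the dent behaviour: on each ball impact, vertices near the
// contact point are pushed inward to crease the letterform mesh.
[RequireComponent(typeof(MeshFilter))]
public class DentOnImpact : MonoBehaviour
{
    public float dentRadius = 0.25f;   // how wide each crease is
    public float dentDepth = 0.05f;    // how far vertices are pushed per hit

    Mesh mesh;
    Vector3[] vertices;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        vertices = mesh.vertices;
    }

    void OnCollisionEnter(Collision collision)
    {
        ContactPoint contact = collision.GetContact(0);
        // Work in local space so the dent follows the letterform's transform.
        Vector3 point = transform.InverseTransformPoint(contact.point);
        Vector3 push = transform.InverseTransformDirection(-contact.normal) * dentDepth;

        for (int i = 0; i < vertices.Length; i++)
        {
            float distance = Vector3.Distance(vertices[i], point);
            if (distance < dentRadius)
            {
                // Falloff: vertices closest to the impact crease the most.
                vertices[i] += push * (1f - distance / dentRadius);
            }
        }

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        // If the MeshCollider should follow the new shape, reassign its sharedMesh here.
    }
}
```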

FINAL 3D Interface, Unity
Letter pile
Letterforms generated from a play session
Generated letterforms

Everything as Input  2022, Graduate Studio Course, Meta Reality Labs, Media Design Practices, Immersion Lab, ArtCenter College of Design, Pasadena, CA


Teaching Team — Jenny Rodenhouse, Ben Hooker, & John Brumley    TA — Alan Amaya    Meta Reality Labs, University Collab Program — Michael Ishigaki, Roger Ibars, Aaron Faucher, Ata Dogan
Collaborated with faculty Ben Hooker and Meta Reality Labs to create a graduate studio that critically interrogated the future of augmented reality interfaces. Computer vision is a computational perspective trained to identify and interpret our environment through pattern-making, sensing, and tracking. This algorithmic point of view has transformed our visual field into new forms of machine sensing and control, turning everything within its field of view into an input that designers need to learn to create for and interact with. Together with Meta Reality Labs, we examined the applications and implications of augmented reality and computer vision around three subjects: Perception, Privacy, and Power. Students trained custom computer vision models and researched the semiotics of datasets.
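As a toy illustration of that premise, everything in view becoming an input, a Unity-style sketch might route labels from a classifier into interaction events; the IObjectClassifier interface and label names below are hypothetical, standing in for whatever custom-trained model a student supplies.

```csharp
using System;
using UnityEngine;

// Hypothetical classifier interface; a student's custom-trained computer
// vision model would sit behind something like this.
public interface IObjectClassifier
{
    // Returns a label for the current camera frame, e.g. "chair", "hand", "dog".
    string ClassifyCurrentFrame();
}

// Treats every recognized object as an input: seeing a label fires an event.
public class EverythingAsInput : MonoBehaviour
{
    public event Action<string> OnObjectSeen;
    IObjectClassifier classifier;   // assigned elsewhere, model-specific

    void Update()
    {
        if (classifier == null) return;
        string label = classifier.ClassifyCurrentFrame();
        if (!string.IsNullOrEmpty(label))
            OnObjectSeen?.Invoke(label);   // downstream interactions subscribe here
    }
}
```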

Talking to Objects  2024 - Ongoing, Dataset, Microscope, Video with Lambda Vue 


How do computer vision systems perceive and reconstruct speech from material vibrations, and what does this reveal about the ways AI translates between physical motion, sound, and language? When machines perceive our voices through matter, what new forms of communication and miscommunication emerge?

Computer vision models can recover speech from the subtle vibrations of objects. Using Lambda Vue, software that amplifies minute motions in video, together with a microscope, Talking to Objects captures microscopic voice markers from material vibrations. In this case study, tinfoyl (a phonetic spelling of tinfoil) is spoken to a sheet of foil, generating a visual dataset that maps graphemes (letters) to phonemes (sounds).
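A rough sketch of the sensing step (not Lambda Vue's actual algorithm): given grayscale frames of the amplified video, averaging the frame-to-frame pixel change inside a region of foil yields a one-dimensional vibration trace that can then be segmented per phoneme. The frame format and scaling here are assumptions for illustration.

```csharp
using System;

// Sketch: turn a sequence of grayscale frames (values 0..1) into a
// per-frame vibration signal by measuring mean pixel change between frames.
public static class VibrationTrace
{
    public static float[] FromFrames(float[][,] frames)
    {
        var trace = new float[frames.Length];
        for (int f = 1; f < frames.Length; f++)
        {
            float[,] prev = frames[f - 1];
            float[,] curr = frames[f];
            int width = curr.GetLength(0), height = curr.GetLength(1);

            float total = 0f;
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    total += Math.Abs(curr[x, y] - prev[x, y]);

            // Mean absolute change: larger values read as stronger surface vibration.
            trace[f] = total / (width * height);
        }
        return trace;
    }
}
```

Peaks in the resulting trace can then be aligned against the spoken word to pick out the visual markers for each phoneme, as in the tinfoil grid below.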

Talking to Objects examines AI-mediated perception and computational sensing, exploring how machines “see,” “hear,” and “understand” the world through data. Computer vision systems capable of recovering speech from visual motion signal a shift in how communication is conceived: language becomes a physical event, and matter itself becomes a communicative surface. It asks what kind of “listening” occurs when AI detects voice through the movement of materials, and how such systems extend or displace human sensory perception.

By observing how computer vision detects and amplifies microscopic vibrations, the project found that machines treat physical motion as a kind of language—translating the invisible resonance of voice into visual data. These transformations expose the interpretive and speculative nature of machine sensing: AI does not simply capture speech but infers and imagines it, piecing together traces of motion and pattern to produce a voice.

In this way, technology becomes both translator and storyteller, reframing communication as a negotiation between signal, noise, and imagination. Technologies of perception do not just extend human senses; they invent new ways of speaking altogether.

With Caro Trigo, Mavis Yue Cao, Christie Wu

Video visualizing the vibrations of speech: saying tinfoil over tinfoil, before and after Lambda Vue software processing
Tinfoil grid to define spoken phonemes and visual letters.
Separating each frame and phoneme
TYPE SPECIMEN - Tinfoil
‘T’ frames
Identifying ‘T’ sound markers and characteristics in the material
Selecting high-contrast markers for each phoneme
Identifying sound markers and characteristics in the material
Sound markers
Microscope setup
Microscope setup in Immersion Lab
Video visualizing the vibrations of speech on a strawberry.

Player Non Player 2024, Seminar Course, History and Theory, Southern California Institute of Architecture (SCI-Arc), Los Angeles, CA


Teaching Team — Alice Bucknell & Jenny Rodenhouse
Created a graduate History and Theory seminar in collaboration with Alice Bucknell for the Southern California Institute of Architecture. The seminar explored the game engine as a perceptual platform that turns the world into an interface where everything within its field of view is playable. Toggling across histories and philosophies of interaction design and their simulated architectures, Player Non Player roams game systems, semiotics, and the ever-dissolving boundary between player and NPC. Across reading presentations, discussions, game jams, and writing exercises, students examined the tactical possibilities and semiotic strangeness of game engine interfaces. The seminar supplemented theory with practice: students experimented with writing their own game worlds and interactions, turning a selected verb into a new game mechanic.
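A verb mechanic of that kind can be as small as a single Unity script; the example below is a hypothetical "push" verb written for illustration, not a specific student project, that nudges every rigidbody within reach when the player presses a key.

```csharp
using UnityEngine;

// Hypothetical "push" verb: pressing space shoves every rigidbody within reach.
public class PushVerb : MonoBehaviour
{
    public float reach = 2f;       // how far the verb acts
    public float strength = 5f;    // impulse applied to each object

    void Update()
    {
        if (!Input.GetKeyDown(KeyCode.Space)) return;

        foreach (Collider hit in Physics.OverlapSphere(transform.position, reach))
        {
            Rigidbody body = hit.attachedRigidbody;
            if (body == null) continue;
            // Push outward from the player, turning one word into a game mechanic.
            Vector3 direction = (body.position - transform.position).normalized;
            body.AddForce(direction * strength, ForceMode.Impulse);
        }
    }
}
```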

Parade Town 2021, 3D Animation and Video, 0:42, ArtCenter College of Design, DTLA, Los Angeles, CA

The project investigates the history of parades as promotions of unrecognized communities and declarations of desired power. Using augmented reality and computer vision, it celebrates the seen and unseen activity and expressions of downtown Los Angeles.

Taught students to create augmented reality lenses using custom computer vision models. Students used machine learning and data training as a visual anthropology study of the city. The course culminated in a public exhibition in downtown Los Angeles. Visitors used QR codes to launch individual student projects and trigger digital overlays that called attention to objects, scenes, or urban conditions selected by the student.

Designed 3D motion graphics and coded curtains to promote the student exhibition and course Parade Town: A Procession of Augmented Realities in DTLA. Motion graphics were produced in Unity, a game development engine, to simulate the behavior of augmented reality, hiding and revealing the title of the exhibition.

Designed curtains with QR codes as a checkered textile pattern. The codes hosted the augmented reality exhibition, launching individual AR projects while the gallery was closed during the pandemic.

Work by — Alan Amaya, Jeremy Yijie Chen, Shiyi Chen, Dunstan Christopher, Elizabeth Costa, Noah Curtis, Cha Gao, Jingwei Gu, Sean Jiaxing Guo, Kate Ladenheim, Miaoqiong Huang, Blake Shae Kos, Jeung Soo Lee, Hongming Li, Tingyi Li, Fuyao Liu, Guowei Lyu, Yiran Mao, Elaine Purnama, Mario Santanilla, Qi Tan, Lucas Thin, Zeyu Wang, Zhiyan Wang, Zoey Wang, Christie Wu, Yue Xi, Haoran Xu, Qianyue Yuwen, Fanxuan Zhu

Exhibition and Teaching Team — John Brumley, Ben Hooker, Jenny Rodenhouse, & Christina Valentine

 FINAL EXHIBITION GRAPHICS Song credits: Hollywood Freaks, Beck
FINAL EXHIBITION Videos split across two screens, left and right windows of gallery
FINAL CODED CURTAINS Scanning QR codes to launch and view augmented reality projects
DTLA Opening
Simulated parade of pedestrians hosting title and letterforms.
Type motion created from dancing motion capture
Video still
Video still
Coded curtains detail
Video still
Curtain mockup created in Unity
Video still
Exhibition project: Resting Spaces,  Yiran Mao
QR code pattern study
Exhibition project: Coffee Shop Scooter: The Portable Gentrifier, Blake Shae Kos
Curtain pattern studies
Parade of Carrying by Yining Gao
Exhibition project: Custom dataset of ‘unlucky’ signifiers in DTLA for Parade Your Luck by Hongming Li  
Cruising dataset,  Alan Amaya

Course — IxP1: Intro to Prototyping 2024, Studio Course, Interaction Design, Immersion Lab, ArtCenter College of Design, Pasadena, CA

Redesigned our first-term introduction to prototyping course. IxP1 introduced students to designing interactions through code. Drawing parallels between visual design and programming, the studio taught the foundations of design and human-computer communication: creating a visual system of words, letters, figures, shapes, or other symbols that convey meaning to both humans and machines. Students learned to design through iterative prototyping, developing their skills in creating live, time-based interfaces. The studio presented a series of programming languages and tools (p5.js, Protopie, and Unity), and students designed a series of actions and graphical control systems.
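A minimal sketch of such a live, time-based control, written here in Unity C# purely as an illustration rather than course material: the object it is attached to grows while the mouse button is held, so the graphic itself is the human-readable feedback, and releasing fires an event carrying the hold duration as the machine-readable side of the same gesture.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Minimal time-based control: grows while the mouse button is held,
// then reports how long it was held when released.
public class HoldControl : MonoBehaviour
{
    public float growthPerSecond = 0.5f;
    public UnityEvent<float> onRelease = new UnityEvent<float>();   // receives hold time in seconds

    float heldFor;
    Vector3 restingScale;

    void Start() => restingScale = transform.localScale;

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            heldFor += Time.deltaTime;
            // The graphic is the feedback: longer holds read as bigger shapes.
            transform.localScale = restingScale * (1f + heldFor * growthPerSecond);
        }
        else if (heldFor > 0f)
        {
            onRelease?.Invoke(heldFor);   // downstream behaviour subscribes to this value
            heldFor = 0f;
            transform.localScale = restingScale;
        }
    }
}
```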

HOUSE — Jenny Rodenhouse

Designer — Educator
[id="Q1004802321"] bodycopy { } [id="Q1004802321"].page { justify-content: center; background-color: #ffffff; } .overlay-content:has([id="Q1004802321"]) { } [id="Q1004802321"] .page-content { border-radius: 0rem; padding-top: 1.7rem; padding-bottom: 1.7rem; background-color: #ffffff; } [id="Q1004802321"] .page-layout { align-items: flex-end; }