HOUSE is a multidisciplinary design studio. My name is Jenny Rodenhouse; I am a designer and educator. I make interactions using 3D game engines, producing software, videos, research, models, graphics, images, exhibitions, installations, courses, labs, and curricula. I am interested in using design to diversify concepts, expressions, and representations within computation. My projects are independent, collaborative, and client-based.

Associate Chair of the Bachelor of Science (BSc) Undergraduate Interaction Design program at ArtCenter College of Design. Faculty Director of the Immersion Lab for seven years, increasing student access to emerging technology. Has taught for nine years as Associate Professor in Undergraduate Interaction Design and Graduate Media Design Practices at ArtCenter. Faculty at the Southern California Institute of Architecture (SCI-Arc).

A graduate of Syracuse University’s five-year Industrial and Interaction Design program (BID), she received her MFA in Media Design from ArtCenter College of Design.
Previously an interaction designer at Microsoft Research, Xbox Entertainment and Devices, and Windows Phone Advanced Development. Worked on the first Windows Phone platform, explored the future of transmedia entertainment, prototyped emerging gestural interactions, designed and shipped the 2011 Xbox interface, created NFL fantasy football on Xbox, and explored cross-platform social experiences for Microsoft Research and Xbox Live.


Publications: (1) Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input, CHI 2010 alt.chi paper. (2) Pen + Touch = New Tools, ACM Symposium on User Interface Software and Technology (UIST). (3) Mixsourcing: Exploring Bounded Creativity as a Form of Crowdsourcing, ACM Conference on Human Factors in Computing Systems (CHI).
Jury Chair for the Core77 Design Awards, a Fellow at the Nature, Art & Habitat Residency in Sottochiesa, Italy, and a Postgraduate Research Fellow in Media Design Practices at ArtCenter College of Design in Pasadena, California.

Work shown at art, architecture, and design events including Netflix’s series The Future Of (Gaming episode); LA’s Architecture and Design Museum; the Venice Biennale of Architecture; Dutch Design Week; Die Digitale (DDDD); Spring Break Art Show; FEMMEBIT; Navel; Roger’s Office Gallery; IxDA 2019; the Swiss Architecture Museum; Architektur Galerie Berlin; BODY and the Anthropocene; the Bi-City Biennale of Urbanism / Architecture; the Architecture + Design Museum; Open City Art City Festival at Yerba Buena Center for the Arts; the Post-Internet Cities Conference; The Graduate Center for Critical Studies; KAM Workshops: Artificial Natures; and CHI. Her projects have been featured in Wallpaper, The Guardian, Wired, Anti-Utopias, and Test Plots Magazine.

VUI, Interview, & Video — Lip Reading Unity, 6:05, 2024

Designed a voice user interface to explore the mouth as a visual language: a shape that we read and have expectations for how it should behave.

Using lip-synchronization software, the interface uses audio amplitude and phonemes to approximate a mouth shape known as a viseme: the visual equivalent of a phoneme, or unit of sound in spoken language, and a generalized facial image that can be used to describe a particular sound.

By separating sound from shape, the simulation uses viseme misalignments to generate elaborate facial shapes, expressions, and new meanings.
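Below is a minimal sketch of that idea, assuming a hypothetical phoneme-to-viseme table rather than the project's actual Unity and SALSA setup: each audio frame's phoneme is looked up in the table and scaled by amplitude, and deliberately offsetting the lookup produces the misaligned mouth shapes described above.

```python
# Minimal sketch (hypothetical; not the project's Unity/SALSA code):
# map phonemes to viseme blend-shape weights, scale by audio amplitude,
# and deliberately misalign the lookup to produce new mouth shapes.

# Hypothetical phoneme-to-viseme table (labels are illustrative only).
VISEME_TABLE = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS", "nn",
                "RR", "aa", "E", "ih", "oh", "ou"]

PHONEME_TO_VISEME = {
    "p": "PP", "b": "PP", "m": "PP",
    "f": "FF", "v": "FF",
    "t": "DD", "d": "DD",
    "s": "SS", "z": "SS",
    "aa": "aa", "eh": "E", "iy": "ih", "ow": "oh", "uw": "ou",
}

def viseme_weights(phoneme: str, amplitude: float, misalign: int = 0) -> dict:
    """Return blend-shape weights for one audio frame.

    `misalign` shifts the chosen viseme along the table, separating
    sound from shape the way the simulation does.
    """
    viseme = PHONEME_TO_VISEME.get(phoneme, "sil")
    index = (VISEME_TABLE.index(viseme) + misalign) % len(VISEME_TABLE)
    shifted = VISEME_TABLE[index]
    # Drive a single blend shape, scaled by how loud the frame is.
    return {shifted: max(0.0, min(1.0, amplitude))}

# Aligned vs. misaligned mouth shape for the same sound:
print(viseme_weights("p", 0.8))               # {'PP': 0.8}
print(viseme_weights("p", 0.8, misalign=4))   # a different, "wrong" viseme
```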

The video documentation uses audio from an interview with Crazy Minnow Studio, creators of SALSA LipSync, an animation tool used to puppeteer character mouths in video games.
Thank you to Crazy Minnow Studio.

FINAL VIDEO
Video excerpt
Video excerpt
Video excerpt
Unity prototype
Video excerpt
Keith Waters and Thomas M. Levergood, viseme table of computer-generated mouth shapes associated with phonemic characters, from ‘DECface: An Automatic Lip-Synchronization Algorithm for Synthetic Faces’ (1996), Cambridge Research Laboratory Technical Report Series
Unity simulation - waiting mode when no audio is present
Still from Unity simulation

Curriculum — Associate Chair IxD BSc  

ArtCenter College of Design, Pasadena, CA, 2023

Course — Everything as Input Studio, Meta Reality Labs, Graduate Media Design Practices, Immersion Lab, ArtCenter College of Design, Pasadena, CA, 2023

Collaborated with faculty Ben Hooker and Meta Reality Labs to create a graduate studio that critically interrogated the future of augmented reality interfaces. Computer vision is a computational perspective trained to identify and interpret our environment through pattern-making, sensing, and tracking. This algorithmic point of view is transforming our visual field into new forms of machine sensing and control, turning everything within its field of view into an input that designers need to learn to create for and interact with. Together with Meta Reality Labs, the studio examined the applications and implications of augmented reality and computer vision through three subjects: Perception, Privacy, and Power. Students trained custom computer vision models and researched the semiotics of datasets.
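As a rough illustration of what training a small custom computer vision model can look like (a hypothetical sketch assuming PyTorch/torchvision and an image folder named "dataset" organized by label; not the studio's actual tooling or datasets):

```python
# Hypothetical sketch of training a small custom image classifier
# (illustrative only; not the studio's actual tooling or dataset).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Images organized as dataset/<label>/<image>.jpg — folder name is assumed.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("dataset", transform=transform)
loader = DataLoader(data, batch_size=16, shuffle=True)

# Reuse a pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```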

Teaching Team — Jenny Rodenhouse, Ben Hooker, & John Brumley    TA — Alan Amaya    Meta Reality Labs, University Collab Program — Michael Ishigaki, Roger Ibars, Aaron Faucher, Ata Dogan

Type Specimens — Tinfoyl Lambda Vue, 2024

Computer vision models can recover speech from the vibrations of objects. Using Lambda Vue, a software that amplifies small motions in video, the project creates type specimens from object vibrations.

Placing different materials under a microscope, we captured and translated speech data from this visual information. Here we said ‘tinfoil’ over tinfoil and extracted visual markers for each sound and letter.
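The sketch below is a crude illustration of the underlying idea, not Lambda Vue's actual algorithm: exaggerate the tiny frame-to-frame differences in a video so that vibrations too small to see become visible markers. The gain value and synthetic test frames are assumptions for demonstration.

```python
# Crude motion-amplification sketch (illustrative only; Lambda Vue's
# actual method is more sophisticated): exaggerate tiny frame-to-frame
# differences so that small vibrations become visible.
import numpy as np

def amplify_motion(frames: np.ndarray, gain: float = 20.0) -> np.ndarray:
    """frames: array of shape (num_frames, height, width), grayscale in [0, 1]."""
    baseline = frames.mean(axis=0)            # static appearance of the scene
    deviations = frames - baseline            # per-frame micro-motion
    amplified = baseline + gain * deviations  # exaggerate the motion
    return np.clip(amplified, 0.0, 1.0)

# Example with synthetic data: 120 frames of a faintly vibrating gradient.
frames = np.tile(np.linspace(0, 1, 64), (120, 64, 1))
frames += 0.005 * np.sin(np.linspace(0, 24 * np.pi, 120))[:, None, None]
visible = amplify_motion(frames)
print(visible.shape, visible.min(), visible.max())
```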

In collaboration with Caroline Trigo, Mavis Yue Cao, Christie Wu

TYPE SPECIMEN - Tinfoyl
Video visualizing the vibrations of speech, saying ‘tinfoil’ over tinfoil, before and after Lambda Vue software processing
Tinfoil grid to define spoken phonemes and visual letters.
Separating each frame and phoneme
‘T’ frames
Identifying ‘T’ sound markers and characteristics in the material
Selecting high-contrast markers for each phoneme
Identifying sound markers and characteristics in the material
Sound markers
Microscope setup
Microscope setup in Immersion Lab

Course — Player Non Player Seminar, History and Theory, Southern California Institute of Architecture (SCI-Arc), Los Angeles, CA, 2024


Teaching Team — Alice Bucknell & Jenny Rodenhouse
Created a graduate History and Theory seminar in collaboration with Alice Bucknell for the Southern California Institute of Architecture. The seminar explored the game engine as a perceptual platform that turns the world into an interface where everything within its field of view is playable. Toggling across histories and philosophies of interaction design and their simulated architectures, Player Non Player roams game systems, semiotics, and the ever-dissolving boundary between player and NPC, rigging the camera of gamespace into infinity. Across reading presentations, discussions, game jams, and writing exercises, students examined the tactical possibilities and semiotic strangeness of game engine interfaces. The seminar supplemented theory with practice: students experimented with writing their own game worlds and interactions through a selected verb as a new game mechanic.

Exhibition, Graphics & Course — Parade Town, ArtCenter College of Design, DTLA, Los Angeles, CA, 2021

The project investigates the history of parades as promotions of unrecognized communities and declarations of desired power. Using augmented reality and computer vision, the student projects celebrate the seen and unseen activity and expressions of downtown Los Angeles.

Taught students to create augmented reality lenses using custom computer vision models. Students used machine learning and data training as a visual anthropology study of the city. The course culminated in a public exhibition in downtown Los Angeles. Visitors used QR codes to launch individual student projects and trigger digital overlays that called attention to objects, scenes, or urban conditions selected by the student.

Designed 3D motion graphics and coded curtains to promote the student exhibition and course Parade Town: A Procession of Augmented Realities in DTLA. The motion graphics were produced in Unity, a game development engine, to simulate the behavior of augmented reality, hiding and revealing the title of the exhibition.

Designed curtains with QR codes arranged as a checkered textile pattern. The codes hosted the augmented reality exhibition, launching individual AR projects while the gallery was closed during the pandemic.
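A minimal sketch of the coded-curtain idea, assuming the third-party qrcode and Pillow packages and placeholder URLs (the real curtains linked to individual student AR lenses; this is not the exhibition's production file):

```python
# Hypothetical sketch: tile QR codes for each project URL into a grid
# that could be printed as a repeating textile pattern.
import io
import qrcode
from PIL import Image

# Placeholder URLs; the real codes launched individual student AR projects.
project_urls = [f"https://example.com/parade-town/project/{i}" for i in range(16)]

TILE = 256   # pixel size of each code
COLS = 4     # codes per row of the curtain panel

def qr_tile(url: str, size: int) -> Image.Image:
    # Render one QR code and return it as a PIL image of the given size.
    buf = io.BytesIO()
    qrcode.make(url).save(buf)
    buf.seek(0)
    return Image.open(buf).convert("RGB").resize((size, size))

rows = -(-len(project_urls) // COLS)  # ceiling division
curtain = Image.new("RGB", (COLS * TILE, rows * TILE), "white")

for i, url in enumerate(project_urls):
    curtain.paste(qr_tile(url, TILE), ((i % COLS) * TILE, (i // COLS) * TILE))

curtain.save("curtain_pattern.png")
```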

Work by — Alan Amaya, Jeremy Yijie Chen, Shiyi Chen, Dunstan Christopher, Elizabeth Costa, Noah Curtis, Cha Gao, Jingwei Gu, Sean Jiaxing Guo, Kate Ladenheim, Miaoqiong Huang, Blake Shae Kos, Jeung Soo Lee, Hongming Li, Tingyi Li, Fuyao Liu, Guowei Lyu, Yiran Mao, Elaine Purnama, Mario Santanilla, Qi Tan, Lucas Thin, Zeyu Wang, Zhiyan Wang, Zoey Wang, Christie Wu, Yue Xi, Haoran Xu, Qianyue Yuwen, Fanxuan Zhu

Exhibition and Teaching Team — John Brumley, Ben Hooker, Jenny Rodenhouse, & Christina Valentine

FINAL EXHIBITION GRAPHICS Song credits: Hollywood Freaks, Beck
FINAL EXHIBITION Videos split across two screens, left and right windows of gallery
FINAL CODED CURTAINS Scanning QR codes to launch and view augmented reality projects
DTLA Opening
Simulated parade of pedestrians hosting title and letterforms.
Type motion created from dancing motion capture
Video still
Video still
Coded curtains detail
Video still
Curtain mockup created in Unity
Video still
Exhibition project: Resting Spaces, Yiran Mao
QR code pattern study
Exhibition project: Coffee Shop Scooter: The Portable Gentrifier, Blake Shae Kos
Curtain pattern studies
Parade of Carrying by Yining Gao
Exhibition project: Custom dataset of ‘unlucky’ signifiers in DTLA for Parade Your Luck by Hongming Li  
Cruising dataset, Alan Amaya

Course — IxP1: Intro to Prototyping Studio, Interaction Design, Immersion Lab, ArtCenter College of Design, Pasadena, CA, 2024

Redesigned our first-term introduction to prototyping course. IxP1 introduces students to designing interaction through code. Drawing parallels between visual design and programming, the studio teaches the foundations of design and human-computer communication: creating a visual system of words, letters, figures, shapes, or other symbols that convey meaning to both humans and machines. Students learn to design through iterative prototyping, developing their skills in creating live, time-based interfaces. I teach methods of research, prototyping, and design for interaction designers that are informed by the medium of code. The studio presents a series of tools (p5.js, ProtoPie, and Unity) and assignments that build toward one refined project.

Lab & Pedagogy — Immersion Lab

Faculty Director, ArtCenter College of Design, Pasadena, CA, 2016 - Ongoing

Started the Immersion Lab 8 years ago at ArtCenter as a space that provides equal access to emerging technologies for art, design, and research. 

Conduct design research on emerging technology through hands-on prototyping in support of cross-disciplinary course programming, IxD curriculum, and design education materials (lectures, workshops, tutorials).

Developed a Technology-Centered Design Research methodology as a design research approach and communication strategy for students: a primary research method for understanding emerging technologies and mediums through investigative making, informing UX and 3D interaction design opportunities.

Curate a hardware and software library for investigative study: AR/VR, computer vision, artificial intelligence, machine learning, 3D engines, motion capture, spatial computing, and scanning.
     
Collaborate across ArtCenter departments and with visiting industry sponsors to create relevant courses that communicate to a diversity of students, learning types, and skill levels.

Sponsors/Partners — Patagonia, Meta Reality Labs, Unity, Leap Motion, Google Daydream, Gravity Sketch, HTC Vive, National Research Group, Oculus, Size Stream, and Snap Inc Research.

Immersion Lab Featured in Core77 — “ArtCenter's Jenny Rodenhouse on Integrating VR/AR Technologies into Design School Curriculum”. 

Tracking workshop, Virtual Furniture, Sam Giambalvo
Mixed reality lighting assignment, work by Feather Xu
3D scanning and rigging workshop in Unity, flowers dancing by Hans Jurisch
Motion capture
Mixed Reality furniture, turning a chair into a button assignment
Custom AR image trigger workshop, Applique project by Tianlu Tang & Christopher Taylor, Sponsored studio with Google Daydream team
Intro to Unity tutorial and workshop  
Studios and topics covered in lab
Immersion Lab talk, mixing and matching tech
Shooting student final documentation
Mixed reality safety, Sensewear by Isabel Li, Lucas Thin, Mingshan Wang, & Duoning Zheng
 
Intro to Unity non-playable characters, work by Isabel Li, Lucas Thin, Mingshan Wang, & Duoning Zheng
Digital Casting exhibition at ArtCenter DTLA featuring student work of emerging 3D scanning techniques
Human Factors in motion capture, using Azure Kinect
Students prototyping in lab
Learning from computer vision model, COCO
Everything as Input, using headset as camera to turn anything into an interactive input
Hosted livestream, a car that tours around student projects
Mixed reality tactility workshop  
Raspberry Pi integration with Unity
Students working in lab
Virtual Campus course with Caltech in Second Life
Machined Influencers course poster
Highway as gallery space in Second Life
ARTV video exhibition graphics for hosted live stream of student work from Snap Research Creative Challenge studio  
Student prototyping frog tongue in Unity
Workshop with video game actors

HOUSE — Jenny Rodenhouse

designer — educator