HOUSE is a multidisciplinary design studio. My name is Jenny Rodenhouse; I am an interaction designer and educator. I make playable graphics using 3D game and physics engines to produce form in action: interactive software, videos, research, models, graphics, exhibitions, installations, courses, labs, and curriculum. I use design to diversify concepts, expressions, and representations within computation. My projects are independent, collaborative, and client-based.
Associate Chair of the Bachelor of Science (BSc) Undergraduate Interaction Design program at ArtCenter College of Design. Faculty Director of the Immersion Lab for seven years, increasing student access to technology. Has taught for nine years as Associate Professor in Undergraduate Interaction Design and Graduate Media Design Practices at ArtCenter. Faculty at the Southern California Institute of Architecture (SCI-Arc).
Graduate of Syracuse University's five-year Industrial and Interaction Design program (BID) and received her MFA in Media Design from ArtCenter College of Design.
Previously an interaction designer at Microsoft Research, Xbox Entertainment and Devices, and Windows Phone Advanced Development. Worked on the first Windows Phone platform, explored the future of transmedia entertainment, prototyped emerging gestural interactions, designed and shipped the Xbox 2011 interface, created NFL fantasy football on Xbox, and explored cross-platform social experiences for Microsoft Research and Xbox Live.
Publications: (1) Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input, CHI 2010 alt.chi, session "I Need Your Input." (2) Pen + Touch = New Tools, ACM Symposium on User Interface Software and Technology (UIST). (3) Mixsourcing: Exploring Bounded Creativity as a Form of Crowdsourcing, ACM Conference on Human Factors in Computing Systems (CHI).
Jury Chair for the Core77 Design Awards, a Fellow at the Nature, Art, & Habitat Residency in Sottochiesa, Italy, and a Postgraduate Research Fellow at Media Design Practices, ArtCenter College of Design, in Pasadena, California.
Work shown at art, architecture, and design events including Netflix's series The Future Of (episode: Gaming), LA's Architecture and Design Museum, the Venice Biennale of Architecture, Dutch Design Week, Die Digitale, DDDD, Spring Break Art Show, FEMMEBIT, Navel, Roger's Office Gallery, IxDA 2019, the Swiss Architecture Museum, Architektur Galerie Berlin, BODY and the Anthropocene, the Bi-City Biennale of Urbanism / Architecture, the Architecture + Design Museum, the Open City Art City Festival at Yerba Buena Center for the Arts, the Post-Internet Cities Conference, The Graduate Center for Critical Studies, KAM Workshops: Artificial Natures, and CHI. Her projects have been featured in Wallpaper, The Guardian, Wired, Anti-Utopias, and Test Plots Magazine.
VUI, Interview, & Video — Dummy
Unity, 6:05, 2024
Ventriloquy is the art or practice of making one's voice appear to come from somewhere else.
Designed a voice user interface to explore the mouth as a visual language: a shape that we read and have expectations for how it should behave.
Using lip-synchronization software, the interface, or dummy, uses audio amplitude and phonemes to approximate a mouth shape known as a viseme: the visual equivalent of a phoneme, or unit of sound, in spoken language. Each viseme is a general facial image that can be used to describe a particular sound.
By separating sound from shape, the simulation uses viseme misalignments to generate elaborate facial shapes, expressions, and new meanings.
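As a minimal sketch of this mapping (not the project's actual implementation, and not SALSA LipSync's API): phonemes are assumed to arrive from some audio-analysis step, each one selects a viseme blend shape on a mouth mesh, loudness drives how far the shape opens, and an adjustable offset deliberately misaligns sound and shape. The class name and phoneme table below are illustrative assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch only: maps incoming phonemes to viseme blend shapes
// and lets an offset deliberately misalign sound and mouth shape.
public class VisemeDummy : MonoBehaviour
{
    public SkinnedMeshRenderer mouth;            // mesh with one blend shape per viseme
    [Range(-5, 5)] public int misalignment = 0;  // shift phonemes against their visemes
    [Range(0f, 1f)] public float amplitude = 0f; // audio loudness, supplied elsewhere

    // Hypothetical phoneme-to-viseme table; real tools use richer sets.
    static readonly List<string> visemes = new List<string>
        { "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS", "nn", "RR", "aa", "E", "ih", "oh", "ou" };

    int currentViseme = 0;

    // Called by whatever analyses the audio stream (assumed, not shown).
    public void OnPhoneme(string phoneme, float loudness)
    {
        int index = Mathf.Max(0, visemes.IndexOf(phoneme));
        // The misalignment is the point: slide the shape away from the sound.
        currentViseme = (index + misalignment + visemes.Count) % visemes.Count;
        amplitude = loudness;
    }

    void LateUpdate()
    {
        if (mouth == null) return;
        for (int i = 0; i < visemes.Count && i < mouth.sharedMesh.blendShapeCount; i++)
        {
            float weight = (i == currentViseme) ? amplitude * 100f : 0f;
            mouth.SetBlendShapeWeight(i, weight);
        }
    }
}
```

Pushing the offset away from zero is what produces the "wrong" but expressive faces described above.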
The video documentation uses audio from an interview with Crazy Minnow Studio, creators of SALSA LipSync, an animation tool used to puppeteer character mouths in video games.
Thank you to Crazy Minnow Studio
FINAL VIDEO: Excerpts. Unity prototype. Unity simulation, waiting mode when no audio is present. Still from Unity simulation.

Curriculum — Associate Chair IxD BSc
ArtCenter College of Design, Pasadena, CA, 2023

Course — Everything as Input Studio, Meta Reality Labs, Graduate Media Design Practices, Immersion Lab, ArtCenter College of Design, Pasadena, CA, 2023
Collaborated with faculty Ben Hooker and Meta Reality Labs to create a graduate studio that critically interrogated the future of augmented reality interfaces. Computer vision is a computational perspective trained to identify and interpret our environment through pattern-making, sensing, and tracking. This algorithmic point of view has transformed our visual field into a site of machine sensing and control, turning everything within its field of view into an input that designers need to learn to create for and interact with. Together with Meta Reality Labs, we examined the applications and implications of augmented reality and computer vision around three subjects: perception, privacy, and power. Students trained custom computer vision models and researched the semiotics of datasets.
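A hedged sketch of the "everything as input" premise, in Unity C#: a detected object class in the camera's view is treated like a key press. The computer vision model itself is assumed to run elsewhere and simply report labels; nothing here is Meta Reality Labs' or the studio's actual code.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch only: a detected object class in the camera's view becomes a
// discrete input event. The detector (a trained computer vision model)
// is assumed to exist elsewhere and call ReportDetection with its labels.
public class DetectionAsInput : MonoBehaviour
{
    [System.Serializable]
    public class Binding
    {
        public string label;            // e.g. "chair", "cup", "hand"
        public UnityEvent onDetected;   // interaction to trigger
    }

    public Binding[] bindings;

    // Called once per detection per frame by the (assumed) vision pipeline.
    public void ReportDetection(string label, float confidence)
    {
        if (confidence < 0.6f) return;  // ignore uncertain detections
        foreach (var binding in bindings)
        {
            if (binding.label == label)
                binding.onDetected.Invoke();
        }
    }
}
```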
Teaching Team — Jenny Rodenhouse, Ben Hooker, & John Brumley
TA — Alan Amaya
Meta Reality Labs, University Collab Program — Michael Ishigaki, Roger Ibars, Aaron Faucher, Ata Dogan
Specimens — Lips
Lambda Vue, 2024
Computer vision models can recover speech from the vibrations of objects. Using Lambda Vue, a software that amplifies small motions in video, the project creates type specimens from object vibrations.
Placing different materials under a microscope, we captured and translated speech data from this visual information. Here we said "tinfoil" over tinfoil and extracted visual markers for each sound and letter.
In collaboration with Caroline Trigo, Mavis Yue Cao, Christie Wu
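As a rough illustration of the underlying idea (Lambda Vue's actual motion-amplification algorithm is not reproduced here): the brightness change between consecutive video frames can be read as a crude vibration trace, and frames where that trace peaks become candidates for a phoneme's visual marker. The names and threshold below are assumptions.

```csharp
using UnityEngine;

// Crude stand-in for reading a vibration signal from video: the mean absolute
// brightness change between consecutive frames becomes one sample of a trace,
// and peaks in that trace mark candidate frames for a phoneme's visual marker.
public static class VibrationTrace
{
    // frames: grayscale pixel values per frame, all arrays the same length.
    public static float[] Compute(float[][] frames)
    {
        var trace = new float[frames.Length];
        for (int f = 1; f < frames.Length; f++)
        {
            float sum = 0f;
            for (int p = 0; p < frames[f].Length; p++)
                sum += Mathf.Abs(frames[f][p] - frames[f - 1][p]);
            trace[f] = sum / frames[f].Length;   // average motion this frame
        }
        return trace;
    }

    // Frames whose motion exceeds a threshold are kept as specimen candidates.
    public static bool IsMarker(float[] trace, int frame, float threshold = 0.05f)
    {
        return trace[frame] > threshold;
    }
}
```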
Video visualizing the vibrations of speech, saying tinfoil over tinfoil, before and after Lambda Vue software processing. TYPE SPECIMEN - Tinfoyl. Tinfoil grid defining spoken phonemes and visual letters. Separating each frame and phoneme. 'T' frames. Identifying 'T' sound markers and characteristics in the material. Selecting high-contrast markers for each phoneme. Sound markers. Microscope setup in Immersion Lab. Video visualizing the vibrations of speech on a strawberry.

Course — Player Non Player Seminar, History and Theory, Southern California Institute of Architecture (SCI-Arc), Los Angeles, CA, 2024
Teaching Team — Alice Bucknell & Jenny Rodenhouse
Created a graduate History and Theory seminar in collaboration with Alice Bucknell for the Southern California Institute of Architecture. The seminar explored the game engine as a perceptual platform that turns the world into an interface where everything within its field of view is playable. Toggling across histories and philosophies of interaction design and their simulated architectures, Player Non Player roams game systems, semiotics, and the ever-dissolving boundary between player and NPC. Across reading presentations, discussions, game jams, and writing exercises, students examined the tactical possibilities and semiotic strangeness of game engine interfaces. The seminar supplemented theory with practice: students experimented with writing their own game worlds and interactions through a selected verb as a new game mechanic.
Exhibition, Graphics & Course — Parade
ArtCenter College of Design, DTLA, Los Angeles, CA, 2021
The project investigates the history of parades as promotions of unrecognized communities and declarations of desired power. Using augmented reality and computer vision, the student projects celebrate the seen and unseen activity and expressions of downtown Los Angeles.
Taught students to create augmented reality lenses using custom computer vision models. Students used machine learning and data training as a visual anthropology study of the city. The course culminated in a public exhibition in downtown Los Angeles. Visitors used QR codes to launch individual student projects and trigger digital overlays that called attention to objects, scenes, or urban conditions selected by the student.
Designed 3D motion graphics and coded curtains to promote the student exhibition and course Parade Town: A Procession of Augmented Realities in DTLA. The motion graphics were produced in Unity, a game development engine, to simulate the behavior of augmented reality, hiding and revealing the title of the exhibition.
Designed curtains with QR codes arranged as a checkered textile pattern. The codes hosted the augmented reality exhibition, launching individual AR projects while the gallery was closed during the pandemic.
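A small sketch of how such coded curtains could route a visitor to a project, assuming QR decoding is handled by the phone camera or a separate library: the decoded payload is matched to a Unity scene containing that student's AR piece. Scene names and payloads here are placeholders, not the exhibition's actual build.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: route a decoded QR payload (decoding handled elsewhere, not shown)
// to the Unity scene holding one student's AR project.
public class CurtainRouter : MonoBehaviour
{
    [System.Serializable]
    public struct Route
    {
        public string qrPayload;   // e.g. "parade/resting-spaces" (placeholder)
        public string sceneName;   // Unity scene containing that AR project
    }

    public Route[] routes;

    // Called with the string read from a scanned QR code.
    public void OnQrDecoded(string payload)
    {
        foreach (var route in routes)
        {
            if (route.qrPayload == payload)
            {
                SceneManager.LoadScene(route.sceneName);
                return;
            }
        }
        Debug.Log($"No project registered for QR payload: {payload}");
    }
}
```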
Work by — Alan Amaya, Jeremy Yijie Chen, Shiyi Chen, Dunstan Christopher, Elizabeth Costa, Noah Curtis, Cha Gao, Jingwei Gu, Sean Jiaxing Guo, Kate Ladenheim, Miaoqiong Huang, Blake Shae Kos, Jeung Soo Lee, Hongming Li, Tingyi Li, Fuyao Liu, Guowei Lyu, Yiran Mao, Elaine Purnama, Mario Santanilla, Qi Tan, Lucas Thin, Zeyu Wang, Zhiyan Wang, Zoey Wang, Christie Wu, Yue Xi, Haoran Xu, Qianyue Yuwen, Fanxuan Zhu
Exhibition and Teaching Team — John Brumley, Ben Hooker, Jenny Rodenhouse, & Christina Valentine
FINAL EXHIBITION GRAPHICS. Song credits: Hollywood Freaks, Beck
FINAL EXHIBITION: Videos split across two screens, left and right windows of the gallery
FINAL CODED CURTAINS: Scanning QR codes to launch and view augmented reality projects. DTLA opening. Simulated parade of pedestrians hosting title and letterforms; type motion created from dancing motion capture. Coded curtains detail. Curtain mockup created in Unity. Exhibition project: Resting Spaces, Yiran Mao. QR code pattern study. Exhibition project: Coffee Shop Scooter: The Portable Gentrifier, Blake Shae Kos. Curtain pattern studies. Parade of Carrying by Yining Gao. Exhibition project: custom dataset of 'unlucky' signifiers in DTLA for Parade Your Luck by Hongming Li. Cruising dataset, Alan Amaya.

Course — IxP1: Intro to Prototyping Studio, Interaction Design, Immersion Lab, ArtCenter College of Design, Pasadena, CA, 2024
Redesigned our first-term introduction to prototyping course. IxP1 introduced students to designing interactions through code. Drawing parallels between visual design and programming, the studio taught students the foundations of design and human-computer communication to create a visual system of words, letters, figures, shapes, or other symbols that convey meaning to both humans and machines. In this course, students learned to design through iterative prototyping to develop their skills in creating live, time-based interfaces. The studio presented a series of programming languages and tools (p5.js, ProtoPie, and Unity). Students designed a series of actions and graphical control systems.
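To give a flavor of the kind of live, time-based control students prototype (the actual assignments vary and also span p5.js and ProtoPie), here is a minimal Unity C# sketch: a shape that pulses with time and changes state when clicked. Values and names are placeholders.

```csharp
using UnityEngine;

// Illustration of a live, time-based control: the shape breathes with time
// and acknowledges a click with a legible state change.
// Requires a Collider and Renderer on the same GameObject.
public class PulsingControl : MonoBehaviour
{
    public float pulseSpeed = 2f;
    public float pulseAmount = 0.1f;
    Renderer surface;
    bool active;

    void Start()
    {
        surface = GetComponent<Renderer>();
    }

    void Update()
    {
        // Time as a design material: the control is never static.
        float pulse = 1f + Mathf.Sin(Time.time * pulseSpeed) * pulseAmount;
        transform.localScale = Vector3.one * pulse;
    }

    void OnMouseDown()
    {
        // Human input flips the control's state and its appearance.
        active = !active;
        if (surface != null)
            surface.material.color = active ? Color.black : Color.white;
    }
}
```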
Lab & Pedagogy — Immersion Lab
Faculty Director, ArtCenter College of Design, Pasadena, CA, 2016 - Ongoing
Started the Immersion Lab 8 years ago at ArtCenter as a space that provides equal access to emerging technologies for art, design, and research.
Conduct design research on emerging technology through hands-on prototyping in support of cross-disciplinary course programming, IxD curriculum, and design education materials (lectures, workshops, tutorials).
Developed the Technology-Centered Design Research methodology as a design research approach and communication strategy for students: a primary research method for understanding emerging technologies and mediums through investigative making, informing UX and 3D interaction design opportunities.
Curate a hardware and software library for investigative study: AR/VR, computer vision, artificial intelligence, machine learning, 3D engines, motion capture, spatial computing, and scanning.
Collaborate across ArtCenter departments and with visiting industry sponsors to create relevant courses that communicate to a diversity of students, learning types, and skill levels.
Sponsors/Partners — Patagonia, Meta Reality Labs, Unity, Leap Motion, Google Daydream, Gravity Sketch, HTC Vive, National Research Group, Oculus, Size Stream, and Snap Inc Research.
Immersion Lab Featured in Core77 — “ArtCenter's Jenny Rodenhouse on Integrating VR/AR Technologies into Design School Curriculum”.
Tracking workshop: Virtual Furniture, Sam Giambalvo. Mixed reality lighting assignment, work by Feather Xu. 3D scanning and rigging workshop in Unity, flowers dancing by Hans Jurisch. Motion capture. Mixed reality furniture, turning a chair into a button assignment. Custom AR image trigger workshop, Applique project by Tianlu Tang & Christopher Taylor, sponsored studio with the Google Daydream team. Intro to Unity tutorial and workshop. Studios and topics covered in the lab. Immersion Lab talk, mixing and matching tech. Shooting student final documentation. Mixed reality safety, Sensewear by Isabel Li, Lucas Thin, Mingshan Wang, & Duoning Zheng. Intro to Unity non-playable characters, work by Isabel Li, Lucas Thin, Mingshan Wang, & Duoning Zheng. Digital Casting exhibition at ArtCenter DTLA featuring student work with emerging 3D scanning techniques. Human factors in motion capture, using Azure Kinect. Students prototyping in the lab. Learning from the COCO computer vision model. Everything as Input, using the headset as a camera to turn anything into an interactive input. Hosted livestream, a car that tours around student projects. Mixed reality tactility workshop. Raspberry Pi integration with Unity. Students working in the lab. Virtual Campus course with Caltech in Second Life. Machined Influencers course poster. Highway as gallery space in Second Life. ARTV video exhibition graphics for hosted livestream of student work from the Snap Research Creative Challenge studio. Student prototyping a frog tongue in Unity. Workshop with video game actors.
Course — Intro to Mixed Reality Studio, ArtCenter College of Design, Pasadena, CA, 2018 - Ongoing
Created this course 7 years ago as one of the first courses to investigate creative opportunities with emerging spatial technologies. This course is an introduction to mixed reality, spatial interaction, and design development workflows in Unity. The seemingly new field of 'mixed reality' builds upon the history of human-computer interaction, designing the ability to craft and control digital information with the human body. In this studio, we will use an array of mixed reality inputs (cameras and sensors) and outputs (Unity) to create mixed reality objects that interweave digital and physical forms, senses, behaviors, and materials. Together we will use Technology-Centered Design Research methodologies to define an insight and design new ways of interacting with 3D space.
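A sketch of one recurring exercise of this kind, turning a physical object into a button: a trigger volume is aligned with the real object (a scanned chair, a tabletop), and whatever the headset tracks as the hand or controller fires it. The tag and component names are assumptions, not any specific SDK's API.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch: treat a physical object as a button. A trigger volume is aligned
// with the real object; the tracked hand or controller is assumed to carry
// a collider (with a Rigidbody) tagged "Hand". Names are placeholders.
[RequireComponent(typeof(Collider))]
public class PhysicalButton : MonoBehaviour
{
    public UnityEvent onPress;   // digital behavior triggered by the physical touch

    void Reset()
    {
        // Configure the attached collider as a trigger when the component is added.
        GetComponent<Collider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Hand"))
            onPress.Invoke();
    }
}
```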
Teaching Team — Jenny Rodenhouse & John Brumley