Ph.D. Computer Science, UCLA 2007
M.S. Computer Science, UCLA 2002
Caught up in the dot-com madness 1993 - 2001
Mobile 3D character platform: a mobile platform for quickly generating chat- or interaction-based virtual humans/characters. Allows fast scripting of complex functionality such as speech, nonverbal behavior, and lip syncing to speech.
Fast Avatar Capture Software is a tool for automatically capturing a 3D avatar of a human subject in less than a few minutes, without the need for a separate operator. From Evan Suma, Andrew Feng, Richard Wang, Ari Shapiro.
Autorigger and reshaper is a tool for automatically rigging, skinning and reshaping a 3D human body scan obtained from an RGB-D sensor (such as the Microsoft Kinect, Intel RealSense, Occipital Structure Sensor and the like) or a 3D scanning cage.
SmartBody is a character animation system that gives an interactive character an extensive set of capabilities and behaviors, such as locomotion, steering, object manipulation, speech synthesis, emotional expression, gesturing, physical simulation, and gazing, among others.
The DANCE software is used for physics-based animation research, including dynamic simulation of rigid bodies, motion capture and dynamic control.
Qiaomu Miao, Sinhwa Kang, Stacy Marsella, Steve DiPaola, Chao Wang, Ari Shapiro; Study of detecting behavioral signatures within DeepFake videos, PETRA '22: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, June 2022, paper, video
MP Aylett, A Shapiro, S Prasad, L Nachman, S Marsella, P Scott-Morgan; Peter 2.0: Building a Cyborg, PETRA '22: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, June 2022
Ari Shapiro, Anton Leuski, Stacy Marsella; UBeBot: voice-driven, personalized, avatar-based communicative video content in A/R, SIGGRAPH Appy Hour, July 2019, Los Angeles, CA
Yajie Zhao, Zeng Huang, Tianye Li, Weikai Chen, Chloe LeGendre, Xinglei Ren, Jun Xing, Ari Shapiro, Hao Li; Learning Perspective Undistortion of Portraits, ArXiv, May 2019
Setareh Nasihati Gilani, David Traum, Rachel Sortino, Grady Gallagher, Kailyn Aaron-Lozano, Cryss Padilla, Ari Shapiro, Jason Lamberton and Laura-Ann Petitto; Can a Signing Virtual Human Engage a Baby's Attention?, Proceedings of The Nineteenth Annual Conference on Intelligent Virtual Agents, July 2019, Paris, France
Ulysses Bernardet, Sin-hwa Kang, Andrew Feng, Steve DiPaola and Ari Shapiro; Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study, Proceedings of The Fourth IEEE VR Workshop on Virtual Humans and Crowds in Immersive Environments, March 2019, Osaka, Japan (paper, video)
Andrea Bönsch, Andrew Feng, Parth Patel and Ari Shapiro; Volumetric Video Capture Using Unsynchronized, Low-Cost Cameras, Proceedings of the 14th International Conference on Computer Graphics Theory and Applications, February 2019 (paper, video)
N Wang, D Schwartz, G Lewine, A Shapiro, A Feng, C Zhuang; Addressing Sexist Attitudes on a College Campus through Virtual Role-Play with Digital Doppelgangers, Proceedings of the 18th International Conference on Intelligent Virtual Agents, November 2018
N Wang, A Shapiro, A Feng, C Zhuang, C Merchant, D Schwartz; Learning by Explaining to a Digital Doppelganger, International Conference on Intelligent Tutoring System, May 2018
B. Scassellati, J. Brawer, K. Tsui, S. Gilani, M. Malzkuhn, B. Manini, A. Stone, G. Kartheiser, A. Merla, A. Shapiro, D. Traum, L.A. Petitto; Teaching Language to Deaf Infants with a Robot and a Virtual Human, Proceedings of SIGCHI, January 2018 (pdf)
H. Wauck, G. Lucas, A. Best, A. Shapiro, A. Feng, J. Boberg, J. Gratch; Analyzing the Effect of Avatar Self-Similarity on Men and Women in a Search and Rescue Game, Proceedings of SIGCHI, January 2018 (pdf)
N Wang, A Shapiro, A Feng, C Zhuang, D Schwartz, S Goldman; An Analysis of Student Belief and Behavior in Learning by Explaining to a Digital Doppelganger, Proceedings of the 8th Workshop on Personalization Approaches in Learning Environments (PALE 2018), June 2018
S. Narang, A. Best, A. Shapiro, D. Manocha; Generating Virtual Avatars with Personalized Walking Gaits Using Commodity Hardware, Proceedings of Thematic Workshops, ACM Multimedia, October 2017 (pdf)
U. Bernardet, S.H. Kang, A. Feng, S. DiPaola, A. Shapiro; A Dynamic Speech Breathing System for Virtual Characters, 17th International Conference on Intelligent Virtual Agents, Stockholm, Sweden, August 2017 (paper)
A. Feng, E. Suma Rosenberg, A. Shapiro; Just-in-time 3D avatars from scans, 30th Conference on Computer Animation and Social Agents (CASA), Seoul, Korea, May 2017
S. Narang, A. Best, A. Feng, S.H. Kang, D. Manocha, A. Shapiro; Motion Recognition on Self and Others on Realistic 3D Avatars, 30th Conference on Computer Animation and Social Agents (CASA), Seoul, Korea, May 2017
A. Bonsch, T. Vierjahn, A. Shapiro, T. Kuhlen; Turning Anonymous Members of a Multiagent System into Individuals, Workshop on Virtual Humans and Crowds in Immersive Environments (VHCIE), Los Angeles, California, March 2017
S. Narang, A. Best, T. Randhavane, A. Shapiro, D. Manocha, PedVR: Simulating Natural Interactions between a Real User and Virtual Crowds, 22nd ACM Symposium on Virtual Reality Software and Technology (VRST), Munich, Germany, November, 2016 (paper, project)
G. Lucas, E. Szablowski, J. Gratch, A. Feng, T. Huang, J. Boberg, A. Shapiro, The effect of operating a doppelganger in a 3D simulation, ACM SIGGRAPH Conference on Motion in Games, San Francisco, CA, October, 2016 (paper) Best Presentation award!
S.H. Kang, A. Feng, M. Seymour, A. Shapiro, Study comparing video-based characters and 3D based characters on mobile devices for chat, ACM SIGGRAPH Conference on Motion in Games, San Francisco, CA, October, 2016 (paper)
M. Chollet, N. Chandrashekhar, A. Shapiro, S. Scherer, L.P. Morency, Manipulating the Perception of Virtual Audiences using Crowdsourced Behaviors, 16th International Conference on Intelligent Virtual Agents, Los Angeles, CA, September, 2016 (paper)
D. Casas, A. Feng, O. Alexander, G. Fyffe, R. Ichikari, P. Debevec, H. Li, K. Olszewski, E. Suma, A. Shapiro, Photorealistic Blendshape Modeling from RGB-D Sensors, 29th Conference on Computer Animation and Social Agents, Geneva, Switzerland, May 23rd-25th, 2016 (paper, video)
A. Feng, D. Casas, A. Shapiro, Avatar Reshaping and Automatic Rigging Using a Deformable Model, ACM SIGGRAPH Conference on Motion in Games, Paris, France, November 16th-18th, 2015 (paper, video, software)
M. Papaefthymiou, A. Feng, A. Shapiro, G. Papagiannakis, A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters, SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Kobe, Japan, November 2-5, 2015
S.H. Kang, A. Feng, A. Leuski, D. Casas, A. Shapiro, Effect of an Animated Virtual Character on Mobile Chat Interactions, Proceedings of the 3rd International Conference on Human-Agent Interaction, Daegu, Korea, October 21-24, 2015 (video, paper, bibtex)
A. Feng, A. Leuski, S. Marsella, D. Casas, S.H. Kang, A. Shapiro, A Platform for Building Mobile Virtual Humans, Proceedings of the 15th International Conference on Intelligent Virtual Agents, Delft, Netherlands, August 26-28, 2015 (paper, bibtex, software)
A. Feng, G. Lucas, S. Marsella, E. Suma, C.C. Chiu, D. Casas, A. Shapiro, Acting the Part: The Role of Gesture in Avatar Identity, ACM SIGGRAPH Conference on Motion in Games, Los Angeles, CA, November 6-8, 2014 (paper, video)
E. Miguel, A. Feng, A. Shapiro, Towards Cloth-Manipulating Characters, The 27th Conference on Computer Animation and Social Agents, Houston, TX, May 26-28 (paper, video)
A. Shapiro, A. W. Feng, R. Wang, G. Medioni, H. Li, M. Bolas, E. Suma, Rapid Avatar Capture Using Commodity Sensors, The 27th Conference on Computer Animation and Social Agents, Houston, TX, May 26-28 (paper, video)
A. W. Feng, Y. Huang, Y. Xu, A. Shapiro, Fast, Automatic Character Animation Pipelines, Journal of Visualisation & Computer Animation (paper preprint, bibtex)
Y. Xu, A. W. Feng, S. Marsella, A. Shapiro, A Practical And Configurable Lip Sync Method for Games, ACM SIGGRAPH Conference on Motion in Games, Dublin, Ireland, November 2013 (paper, video, bibtex)
L. Batrinca, G. Stratou, A. Shapiro, L.P. Morency, S. Scherer, Cicero-towards a multimodal virtual audience platform for public speaking training, 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, August 2013 (paper, bibtex)
A. Hartholt, D. Traum, S. Marsella, A. Shapiro, G. Stratou, A. Leuski, L.P. Morency, J. Gratch, All Together Now: Introducing the Virtual Human Toolkit, 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, August 2013 (paper, bibtex)
S. Marsella, A. Shapiro, A. W. Feng, Y. Xu, M. Lhommet, S. Scherer, Towards Higher Quality Character Performance in Previz, Digital Production Symposium, Anaheim, CA July 2013 (paper, bibtex)
S. Marsella, Y. Xu, A. W. Feng, M. Lhommet, S. Scherer, A. Shapiro, Virtual Character Performance from Speech, Symposium on Computer Animation, Anaheim, CA July 2013 (paper, video, bibtex)
A. Shapiro, A. W. Feng, The Case for Physics Visualization in an Animator's Toolset, 8th International Conference on Computer Graphics Theory and Applications, Barcelona, Spain, February, 2013 (pdf, bibtex)
A. Feng, Y. Huang, Y. Xu, A. Shapiro, Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character, The Fifth international conference on Motion in Games, Rennes, France, November, 2012 (pdf, video, bibtex) Best Paper award!
A. Feng, Y. Huang, M. Kallmann, A. Shapiro, An Analysis of Motion Blending Techniques, The Fifth international conference on Motion in Games, Rennes, France, November, 2012 (pdf, video1, video2, bibtex)
A. Feng, Y. Xu, A. Shapiro, An Example-Based Motion Synthesis Technique for Locomotion and Object Manipulation, Symposium of Interactive 3D Graphics and Games, Costa Mesa, CA, March 2012 (pdf, video, bibtex)
A. Shapiro, Building a Character Animation System, Invited Talk, Motion in Games, 2011 (pdf, bibtex)
H. van Welbergen, Y. Xu, M. Thiebaux, W.W. Feng, J. Fu, D. Reidsma, A. Shapiro, Demonstrating and Testing the BML Compliance of BML Realizers, IVA 2011 (pdf, bibtex)
A. Shapiro, S.H. Lee, Practical Character Physics For Animators, IEEE Computer Graphics and Applications, July/August 2011 (pdf, video, bibtex)
B. Allen, D. Chu, A. Shapiro, P. Faloutsos, On Beat! Timing and Tension for Dynamic Characters, ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), ACM Press, August, 2007 (pdf, video, bibtex).
A. Shapiro, D. Chu, B. Allen, P. Faloutsos, The Dynamic Controller Toolkit, The 2nd Annual ACM SIGGRAPH Sandbox Symposium on Videogames, San Diego, CA, August, 2007 (pdf, videos), (bibtex)
A. Shapiro, M. Kallmann, P. Faloutsos, Interactive Motion Correction and Object Manipulation, Symposium on Interactive 3D Graphics and Games, Seattle, Washington, April, 2007
A. Shapiro, Y. Cao, P. Faloutsos, Style Components, Graphics Interface 2006, Quebec City, Quebec, Canada, June, 2006 (pdf, videos), (bibtex)
A. Shapiro, P. Faloutsos, V. Ng-Thow-Hing, Dynamic Animation and Control Environment, Graphics Interface 2005, p. 61-70, Victoria, British Columbia, Canada, May, 2005.
A. Shapiro, F. Pighin, P. Faloutsos, Hybrid Control For Interactive Character Animation, The Eleventh Pacific Conference on Computer Graphics and Applications, p. 455-460, Canmore, Alberta, Canada, October, 2003.
A. Feng, E. Suma, A. Shapiro. Just-in-time, viable, 3D avatars from scans, SIGGRAPH 2017 Talks, Los Angeles, California, August 2017
D. Casas, O. Alexander, A. Feng, G. Fyffe, R. Ichikari, P. Debevec, R. Wang, E. Suma, A. Shapiro. My Digital Face, SIGGRAPH 2015 Real Time Live, Los Angeles, California, August 2015 (video)
D. Casas, O. Alexander, A. Feng, G. Fyffe, R. Ichikari, P. Debevec, R. Wang, E. Suma, A. Shapiro. Blendshapes from Commodity RGB-D Sensors, SIGGRAPH 2015 Talks, Los Angeles, California, August 2015 (paper, video)
A. Shapiro, A. Feng, W. Ruizhe, H. Li, M. Bolas, G. Medioni, E. Suma, Make Me An Avatar, SIGGRAPH 2014 Real Time Live, Vancouver Canada, August 2014
A. Feng, A. Shapiro, W. Ruizhe, H. Li, M. Bolas, G. Medioni, E. Suma, Rapid Avatar Capture and Simulation Using Commodity Depth Sensors, SIGGRAPH 2014 Talk, Vancouver, Canada, August 2014 (pdf)
A. Shapiro, S.H. Lee, Practical Character Physics For Animators, SIGGRAPH 2009 Talk, New Orleans, LA, August 2009 (pdf)
J. Bayever, J. Gordon, G. McMillan, Y. Lakhani, J. Mancewicz, A. Shapiro, Making Statues Move, SIGGRAPH 2008 Talk, Los Angeles, CA, August 2008
A. Shapiro, P. Faloutsos, Interactive and Reactive Control, SIGGRAPH 2005 Sketches, Los Angeles, CA, August 2005
A. Shapiro, Y. Cao, P. Faloutsos, Interactive Motion Decomposition, SIGGRAPH 2004 Sketches, Los Angeles, CA, August 2004
A. Shapiro, P. Faloutsos, Complex Character Animation that Combines Kinematic and Dynamic Control, SIGGRAPH 2003 Sketches & Applications, San Diego, CA, July 2003.
Refereed Posters & Demos
G. Lucas, E. Szablowski, J. Gratch, A. Feng, T. Huang, J. Boberg, and A. Shapiro, Do Avatars that Look Like their Users Improve Performance in a Simulation?, IVA 2016, Los Angeles, CA, September, 2016
R. Artstein, A. Gainer, K. Georgila, A. Leuski, A. Shapiro, D. Traum, New Dimensions in Testimony Demonstration, NAACL 2016, San Diego, CA, June 2016
S.H. Kang, A. Feng, A. Leuski, D. Casas, A. Shapiro, Smart Mobile Virtual Humans: "Chat with me!", IVA 2015, Delft, Netherlands, August 2015
D. Casas, O. Alexander, G. Fyffe, R. Ichikari, R. Wang, P. Debevec, E. Suma, A. Shapiro, Rapid Photorealistic Blendshapes from Commodity RGB-D Sensors, I3D 2015, San Francisco, California, March 2015. Best Poster Award! (paper)
A. Shapiro, A. Feng, R. Wang, G. Medioni, E. Suma, Automatic Acquisition and Animation of Virtual Avatars, IEEE VR 2014, Minnesota, March 2014 Honorable Mention award!
A. Leuski, A. Shapiro, R. Gowrisankar, Y. Xu, T. Richmond, A. Feng, Mobile Personal Healthcare Mediated by Virtual Humans, Proceedings of the companion publication of the 19th international conference on Intelligent User Interfaces, Haifa, Israel, February 2014 paper, bibtex
E. Miguel, A. Feng, Y. Xu, A. Shapiro, Towards Cloth-Manipulating Characters, ACM SIGGRAPH Conference on Motion in Games, Dublin, Ireland, November 2013
Y. Xu, A. Feng, A. Shapiro, A Simple Method for High Quality Lip Syncing, Symposium of Interactive 3D Graphics and Games 2013, Orlando, Florida, March, 2013
A. Shapiro, D. Chu, P. Faloutsos The Controller Toolkit, Symposium of Computer Animation 2006, Posters & Demos, Vienna, Austria, August 2006
M. Kallmann, A. Shapiro, P. Faloutsos, Planning Motions in Motion, Symposium of Computer Animation 2006, Posters & Demos, Vienna, Austria, August 2006
A. Shapiro, P. Faloutsos, Steps Toward Intelligent Interactive Control, Symposium of Computer Animation 2005, Posters & Demos, Los Angeles, CA, July 2005
A. Shapiro, P. Faloutsos, Victor Ng-Thow-Hing, Dynamic Animation and Control Environment, Eurographics Symposium on Computer Animation, Posters & Demos, Grenoble, France, August 2004
A. Shapiro, Y. Cao, P. Faloutsos, Stylistic Motion Decomposition, Eurographics Symposium on Computer Animation, Posters & Demos, Grenoble, France, August 2004
Book Chapters & Demos
L.P. Morency, A. Shapiro, S. Marsella, Embodied Autonomous Agents, chapter in Handbook of Virtual Environments: Design, Implementation, and Applications, 2015
L.P. Morency, A. Shapiro, S. Marsella, Modeling Human Communication Dynamics for Virtual Humans, Coverbal Synchrony in Human-Machine Interaction, CRC Press, 2013
Alvin and the Chipmunks: The Squeakquel, Rhythm & Hues Studios 2009 (feature film)
The Incredible Hulk, Rhythm & Hues Studios 2008 (feature film)
The Force Unleashed, Industrial Light & Magic/LucasArts 2008 and The Force Unleashed, Ultimate Sith Edition 2009 (video games)
A web-based version of the game of Diplomacy. Originally developed by Guy Tsafnat and me, this version is written in Java and plugs into a JSP-compliant webserver. It was used as a testbed for my automated player and can currently self-play approximately 1000 games/day.
Sacramento 2001 for my friend's bachelor party. This part of the trip was called Chunder. Not all of us made it through the falls. Here's the entire sequence if you'd like to see it.
August 8, 2022
At some point, deepfake videos will be pixel perfect, indistinguishable to the naked eye, and synthetic voice replication will similarly be indistinguishable from a real voice to the human ear. We published a paper showing that an underlying behavioral signal may be detectable regardless of image quality. paper
December 12, 2021
My patent for turning a person into a 3D avatar and then animating it has been granted by the U.S. Patent Office. It includes the generation of usable hands and a high-resolution face: "Rapid avatar capture and simulation using commodity depth sensors"
October 21st, 2020
Pandorabots and Embody Digital team up for Bot Battle: an embodied, 3D chat battle between Loebner Prize-winning chatbot Mitsuku from Pandorabots against Facebook's BlenderBot. You can watch the battle on kuki.ai or streaming on Twitch at https://www.twitch.tv/kuki_ai from October 21st, 2020 until November 3rd, 2020. Embody Digital's automatic animation technology powers the characters by converting their text answers into conversational voice, animation and emotion.
June 8th, 2020
Embody Digital reveals its patent for automatically generating movement from text or voice for digital humans and robots. The patent covers converting input from a conversational AI system into body language and nonverbal behavior derived from those words. So if you want to power a digital human from Google DialogFlow, IBM Watson, Amazon Lex or any other AI, it's likely that this patent covers that process.
December 9th, 2019
Keynote talk at the IEEE AIVR CRDH (Capture and Rendering of Digital Humans) Workshop (https://aivr2019.github.io/CRDH-workshop/) entitled "Digital humans: models of behavior and interactivity"
October 1st, 2019 Embody Digital releases AI Expert: build a talking avatar in minutes using only a Google Sheets spreadsheet.
May 17th, 2019 Embody Digital's patented software powers robots, avatars, and now...cyborgs...
Dr. Peter Scott-Morgan, like Stephen Hawking, has 'terminal' motor neuron disease (MND) and will soon be unable to speak or respond physically to others.
But he had his voice cloned into a synthetic text-to-speech voice, and his face scanned to produce a 3D digital likeness of himself.
And now, using the same assistive technology system that Stephen Hawking used, he will be able to engage in conversation through text and power his avatar with emotion and expression through Embody Digital's automated performance system, which animates the head, face and lips from simple text and emotion tags, like this:
"[happy] Hello, I am the future. [surprised] For decades to come I will keep Peter's personality [agree] alive and for all the time, I will continue to evolve. [sad] Dying as a human, [happy] living as a cyborg."
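A tagged script like the one above can be split into per-emotion segments with a few lines of Python. This is only a minimal sketch of the idea based on the tag format shown in the example; the function name and behavior are my own illustration, not Embody Digital's actual API.

```python
import re

def parse_emotion_tags(script):
    """Split a script like "[happy] Hello. [sad] Goodbye." into
    (emotion, text) pairs, one per tagged segment."""
    segments = []
    # Each match captures a [tag] and the text that follows it,
    # up to the next opening bracket.
    for match in re.finditer(r"\[(\w+)\]\s*([^\[]*)", script):
        emotion, text = match.group(1), match.group(2).strip()
        if text:
            segments.append((emotion, text))
    return segments

print(parse_emotion_tags("[happy] Hello, I am the future. [sad] Dying as a human."))
# → [('happy', 'Hello, I am the future.'), ('sad', 'Dying as a human.')]
```

Each (emotion, text) pair could then be handed to a text-to-speech and animation stage, with the emotion label driving facial expression for that span of speech.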
Peter describes this as part of becoming 'Peter 2.0', and is an ongoing project with numerous collaborators.
January 6th, 2019 Embody Digital has released a beta of UBeBot - avatarize yourself with a selfie, talk, act out what you say, make videos and share on social media. Now available on the Google Play Store.
December 23rd, 2018 Report, a news program from Italy, has covered my work with Dr. Leslie Saxon to create digital doctors in its Health 4.0 show.
October 23rd, 2018 Embody Digital has won the 2018 Best Technology Award at the Visual 1st conference in San Francisco! Here's a quote from the conference:
Judge Andy Kelm summarizes his take on Embody Digital, "We found that Embody Digital stood out among a very competitive group of technology-oriented entrants. Not only is the Embody technology novel and innovative, but we also found it to be extremely timely given the significant interest and investment going into AR and VR. We see a wide range of commercial options from skins & emotes in gaming platforms to B2B opportunities like Amazon Sumerian, and we look forward to seeing what the Embody Digital team does next."
BBC Click coverage of the avatar construction process in the context of preserving legacy. (video)
January 28th, 2017 Embody Digital releases a beta of its A/R Avatar Director app, which allows you to automatically animate an avatar in A/R using only your voice.
August 9th, 2017 Coverage of our Digital Doctors project:
May 10th, 2017
UploadVR coverage of our 20 minute avatars for virtual reality:
May 10th, 2017
Reuters coverage of our digital doctors project:
March 19th, 2017
I gave a keynote talk at the Virtual Humans and Crowds for Immersive Environments workshop (https://sites.google.com/site/vhcieieeevr2017/) co-located with the IEEE VR conference. The talk was titled "Towards the creation of a digital 'you' for immersive environments"
October 9th, 2016
Co-presented the DocOn prototype app at the USC Body Computing Conference. My team created a 'digital world expert in cardiology and atrial fibrillation' from Dr. Leslie Saxon. The idea is to 'scale' the reach of a world expert in various health areas and provide information to those who do not have access to such resources. The digital doctor was built using our Rapid Avatar pipeline, and the mobile app using our mobile virtual humans platform.
July 1st, 2016
Started my appointment as research faculty at the University of Southern California in the Viterbi School of Engineering.
June 28th, 2016
I was a keynote speaker at the Computer Graphics International (CGI) conference in Crete, Greece. http://www.ics.forth.gr/CGI2016. My presentation was entitled "Rapid Creation of Digital Characters"
March 7th, 2016
Our new process for creating a photorealistic virtual character combines bodies and faces using commodity hardware, and takes only 20 minutes with no artistic intervention or technical expertise.
February 18th, 2016
Uploaded my Practical Character Physics for Animators talk which covers the need for physics visualization in an animator's toolset with many examples from real films:
February 3rd, 2016
We have released all the software and process needed to scan, rig and create your own avatar in minutes.
January 29th, 2016
Coverage of Dr. Leslie Saxon's initiative to virtualize doctors as a means to provide additional avenues of communication to medical experts. My team put together a prototype of the virtual doctor using our avatar technologies as an economical way to generate a virtual character of a specific person. Typically, creating a photorealistic digital representation of a particular person takes a massive amount of 3D expertise and time. We were able to accomplish this in about 2 days, and today that process would take around 4 hours, making such a representation economically viable for a large number of people. You can see my researcher Andrew Feng with Dr. Leslie Saxon doing facial capture.
November 14th, 2015
Automatic rigging and reshaping tool for 3D human body scans. You can see it in action here:
September 10th, 2015
Our latest work accepted at the 2015 ACM SIGGRAPH Motion in Games Conference (MIG 2015) showing our automated rigging and body reshaping from RGB-D or photogrammetry scans.
August 14th, 2015
My summer intern, Marco Volino, put together a 100-camera photogrammetry cage based on Raspberry Pis for his summer project in my lab. As a result, we can now construct a high-quality 3D model from a body scan in about 15 minutes. The video shows some results from the system and includes many of USC ICT's summer interns (everybody loved being scanned and seeing themselves in 3D...)
August 12th, 2015
Our presentation at SIGGRAPH's Real Time Live called 'My Digital Face'. We scanned, constructed and animated a photorealistic face (of my colleague and co-presenter, Evan Suma) in 5 minutes using a single Intel RealSense sensor.
July 4th, 2015
Photorealistic faces from RGB-D sensors
Results from our rapid blendshapes pipeline using the Intel RealSense sensor. We'll be demonstrating the scanning, processing and control of a photorealistic digital face in 5 minutes at this year's SIGGRAPH 2015 Real Time Live event.
April 27th, 2015
Photorealistic faces from RGB-D sensors
Our latest project showing the generation of a set of blendshapes from a single RGB-D sensor using a near-automatic pipeline. We won Best Poster at the I3D 2015 conference in March for this work.
April 17, 2014
News article from USC's Viterbi School of Engineering on the Fast Avatar Capture and Simulation work:
February 21, 2014
Here is a Gizmodo article on our Rapid Avatar Capture and Simulation project where we can capture a person using a first-generation Microsoft Kinect system and simulate them in a matter of minutes.
We believe that this kind of capability dramatically changes the economics of avatar capture (essentially, it's now free and takes very little time) and will have an impact on 3D character acquisition and use going forward.
Here's a video of the entire capture process:
I volunteered for a project out of ICT's Graphics Lab. My face and performance were captured by their Light Stage (I was not involved with the technical aspects). Here are the results, as shown on different technological platforms, including Nvidia:
In the days following the capture, I would walk by my colleagues in the Graphics Lab (my office is very close to theirs) and they would study me very closely as I walked past. Occasionally they would say things like "You should see what we are doing to 'Ari' today." It didn't take long before I insisted that they call my digital doppelganger 'Ira' instead of 'Ari', to loosen some of the association between myself and this digital version of me. You can do what you want with Ira; it gets a little personal when you are doing it to 'me'. I'm sure that this phenomenon of capturing a person, digitizing them, and then putting their digital version in various situations will lead to a number of psychological studies, particularly now that the distinction between the two is getting smaller and smaller.
In case you are curious, the 'yogurt parfait' incident came when the director (Oleg Alexander from ICT) asked me to get mad about something so that they could record some kind of emotional expression. About a week before the capture session, I had stopped by McDonald's in the morning for their $1 sausage muffin (substituting sausage for egg) and the $1 fruit and yogurt parfait, as I had done a few times a week for the past month. Usually the strawberries are a bit too cold, and sometimes frozen, so I would typically eat the yogurt and sometimes not even touch the strawberries, depending on how icy and cold they were. One day, they gave me an entire plastic cup full of frozen, hard strawberries without a bit of yogurt, which I didn't realize until I left the drive-through. I came back the next day, asked for a refund, and then asked the cashier to check the parfaits and make sure that there was enough yogurt in them. This turned into an unpleasant exchange with the manager on duty, who insisted that all parfaits are exactly the same, that it would have been impossible to get a parfait that lacked yogurt, and refused to check any of the existing parfaits for their yogurt content (I still wanted another one...). I then wrote a complaint to McDonald's via email. They sent me a coupon for a free meal, told me they took my complaint seriously, and said they would talk to the manager at that restaurant. That was about as much effort as I wanted to put into a defective $1 purchase. I went back to that McDonald's several weeks later, and noticed the manager wearing what appeared to be a different, more formal uniform, and the cashier also, for the first time, refused to substitute sausage for egg in the $1 sausage muffin. So I assume that someone talked to the owner and the manager, and among other things, a decision was made to no longer allow substitutions. Not sure what happened to the parfaits - I stopped buying them.
I suspect that my complaint set in motion a number of things. All in all, I stopped going there for breakfast. So it's nice that Digital Ira can carry on my message without any additional effort on my part (how long do things last on the Internet these days? Forever?), and stand up for the little guy against a multinational corporation.