Since 2021, when Epic Games, the developer of the Unreal Engine, released the MetaHuman tool for creating realistic human models, the FrostBit software lab has followed the development of digital human models and their use with interest. To date, MetaHuman has been used, or considered for use, in several FrostBit projects to support the modelers' workflow. Over the past year, our character creation process has gained new tools through the Virtual Production Studio Technologies (VPST) project, which enabled the acquisition of equipment for building a virtual production studio. New technologies for our laboratory included the OptiTrack motion capture system for producing character animations and the Marvelous Designer clothing tool for creating realistic attire for human models. In this article, I will discuss the Virtual Elf demo carried out as part of the VPST project, the realism of MetaHuman characters, and the challenges of character creation. In addition, I will delve into the advantages that programs like MetaHuman and Marvelous Designer offer professionals in the gaming industry.
Realism in Project Objectives
3D modeling and its technologies have long been evolving towards greater realism. Modeling programs aim to simulate physics better and more accurately than before, and modern rendering techniques enable the creation of highly realistic worlds. At FrostBit, we advance alongside emerging technologies, and the creation of human-like characters is one area that has seen significant progress in recent years. Our laboratory's experts are therefore actively interested in testing and evaluating these tools as they become part of the broader discussion.
FrostBit has long required various human models, and this year the need for human-like models has come to the forefront in several projects. Among our ongoing projects, VPST, VR Speech Simulation Game and Xstory have all needed human-like designs in some form. Each of these projects has explored using MetaHuman for character creation. MetaHuman was also used in previous years, for example in the CultureExpert project, which ended in 2023 and implemented a simulation of interaction encounters for the training of health and social services personnel. MetaHuman was first employed at FrostBit in 2021, when the Migael project tested creating several human models with the newly released tool.
In the VPST project, a human-like character was needed for the interactive Virtual Elf demo to be installed in the Christmas House in Santa Claus Village. The character was to react to visitors' actions as if it were one of Santa's elves, so the need for a realistic human figure was clear. Simulations and games developed at FrostBit that require human characters often focus on simulating interactive encounters. In the Virtual Elf simulation, for example, the "player", i.e. a passer-by, is in interactive contact with a virtual character. In these situations, the character should be as approachable and natural as possible. A common thread across many of our projects is the need for a human model that can take part in some form of interactive encounter. Another recurring requirement is that these models can be created quickly. Epic Games' MetaHuman tool meets these needs: it provides a solution for situations requiring a high-quality, detailed human model in a short timeframe.
The Uncanny Valley Theory in the Acceptance of Realistic Characters
MetaHuman is a browser-based tool launched by Epic Games in 2021 and marketed as a way to create realistic human characters quickly. FrostBit's specialist Jere Jalonen (2023) examined the creation of realistic-looking 3D characters in his thesis, focusing on the factors developers should consider when designing them. In his work, he refers to the uncanny valley effect, which some researchers view as a spectrum of how unsettling or familiar we perceive human-like characters to be.
Oddey and White (2009, 31-32) discuss an article originally published in 1978 by the Japanese robotics researcher Masahiro Mori, in which Mori describes the "uncanny valley" as a dip in the level of familiarity, in other words in the degree of human acceptance toward human-like creations. Mori argued that the more closely robots resemble humans, the more people like them, but only up to a point, after which they fall into the "uncanny valley", where people find them repulsive and disgusting.
According to Higgins, Egan, Fribourg, Cowan, and McDonnell (2021, 2), Mori’s uncanny valley theory has been a significant topic in scientific discussions concerning robots and virtual humans throughout the 21st century. They refer to Draude’s (2011) publication on virtual humans, which describes the uncanny valley effect as beginning at the point where people start to experience negative feelings toward a virtual human or robot and ending when realism and positive effects outweigh the negative. In other words, the uncanny valley effect and associated negative emotions can be seen as diminishing once the entity’s human resemblance reaches a certain critical point. This theory implies that there is an upper threshold where both familiarity and realism are optimized (Higgins et al. 2021, 2).
However, Draude (2011, 2) also urges a critical perspective on the uncanny valley theory, as it can be considered unscientific and controversial. Nonetheless, she states that even if this particular theory is set aside, the phenomenon of strangeness plays an important role in the design of artificial beings: it serves as a focal point for the reception and acceptance of artificial characters when a highly realistic appearance is the design goal.
Although Mori’s theory has often been viewed as lacking evidence, increasing evidence of the uncanny valley effect has emerged in studies examining the impact of realism on perceptions of virtual characters (Higgins et al. 2021, 2). Jerald (2016, 50) describes the uncanny valley theory as controversial, characterizing it more as a simple explanatory model than one supported by scientific evidence. However, he notes that this straightforward concept has value, as it helps us consider how to design and give shape to a virtual character.
Oddey and White (2009, 33) note that 3D modeling is making significant strides in replicating photorealism, as software development focuses on targeted simulation using technologies such as ray tracing, caustics (light patterns created by reflection and refraction), subsurface scattering, and dynamics. These advancements, along with other developments in 3D technology, are taking realism simulation capabilities to an entirely new level. Today, Epic Games' MetaHuman tool stands as one of the outcomes of these advancements. Over the past decade, rendering techniques have evolved rapidly, with ongoing efforts to make human-like characters increasingly realistic.
The Human Likeness of MetaHuman
In MetaHuman's release year, 2021, Higgins and colleagues at Trinity College Dublin conducted a study on the importance of the level of detail (LOD), i.e. the degree of detail rendered in a character, in the perception of virtual humans created with MetaHuman. As the LOD level changes, the character's geometry changes with it: more detailed levels use more polygons and therefore show finer detail. The researchers compared several human models, selecting LOD4, from the eight available LODs (0–7), as the reference point for lower detail; LOD4 represented high-quality, human-like rendering prior to the arrival of tools like MetaHuman. For comparison, they used MetaHuman's highest-quality level, LOD0. The differences between these two levels were substantial; for instance, the number of vertices defining facial detail was tens of thousands higher in the newer model. As a conclusion to their study, they suggested that current "ultra-virtual" models like MetaHuman might no longer be stuck in the uncanny valley, as test participants rated them as more appealing, more human-like, and less eerie than lower-quality versions of the same character. (Higgins et al. 2021, 4-5.)
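For readers working in Unreal Engine, the sketch below illustrates how such a comparison could be set up in practice: a hypothetical actor (the class and property names are my own, not part of MetaHuman) that pins a character's skeletal mesh to a chosen LOD so that, for example, LOD0 and LOD4 can be viewed side by side.

```cpp
// A minimal sketch (hypothetical actor, illustrative names) for pinning a character's
// skeletal mesh to a specific LOD in Unreal Engine, e.g. to compare LOD0 and LOD4 side by side.
#include "GameFramework/Actor.h"
#include "Components/SkeletalMeshComponent.h"
#include "LodComparisonActor.generated.h"

UCLASS()
class ALodComparisonActor : public AActor
{
    GENERATED_BODY()

public:
    // The character whose skeletal mesh should be locked to a fixed LOD (assigned in the editor).
    // A MetaHuman blueprint contains several skeletal meshes; this sketch simply uses the first one found.
    UPROPERTY(EditAnywhere, Category = "LOD")
    AActor* TargetCharacter = nullptr;

    // LOD index to display; 0 is the most detailed level, higher indices are coarser.
    UPROPERTY(EditAnywhere, Category = "LOD")
    int32 LodIndex = 0;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        USkeletalMeshComponent* Mesh =
            TargetCharacter ? TargetCharacter->FindComponentByClass<USkeletalMeshComponent>() : nullptr;

        if (Mesh && LodIndex < Mesh->GetNumLODs())
        {
            // SetForcedLOD uses 1-based values; 0 would restore automatic LOD selection.
            Mesh->SetForcedLOD(LodIndex + 1);
        }
    }
};
```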
It's interesting that the less detailed LOD version of MetaHuman used in the study, once considered the pinnacle of technology, is now rated as less human due to its lack of detail. MetaHuman was launched in April 2021, and the study was conducted in September of the same year. Since then, Epic Games has released several updates to MetaHuman, such as the "Mesh to MetaHuman" feature introduced in 2022, which allows MetaHuman versions to be created from photogrammetry scans or custom-modeled faces. With this development, future versions of MetaHuman are likely to reach an even higher level of human likeness as it becomes possible to create more specific details.
The creation of the Virtual Elf model highlighted MetaHuman's ease of use and precise details, such as skin textures and facial features. MetaHuman allows users to import pre-made or custom face models into the tool and blend them, enabling quick adjustments to achieve realistic changes in facial appearance. For example, if a character should have broad facial features but a narrow nose, the tool can be used to select human models whose features correspond to particular parts of the face and combine them with one another. Custom-made face models, such as those based on a scanned human face, can also be imported, allowing MetaHuman to generate facial features based on the model. However, other details of the character, such as skin texture and color, can only be modified using the options available within MetaHuman. This means that, at least for the time being, it is not possible to create an exact replica of a human face, only a very similar version.
Although MetaHuman performed excellently in creating realistic human features during the workflow, its current version still has significant limitations, such as the lack of natural hair and diverse clothing options. Epic Games may expand and enhance these features within the application in the future, but at present, MetaHuman requires a separate clothing tool if the model needs even moderately detailed garments. In addition, there are limitations in creating a MetaHuman body, as the tool provides only a few body shapes and height options.
Despite these limitations, MetaHuman is at the cutting edge of current technology for a free and publicly accessible application, making it possible for anyone interested in game development to quickly create a visually impressive human model base. MetaHuman is therefore likely to meet most visual needs for human models, unless the goal is to create a unique digital twin, complete with specific body shapes.
Clothing Tool Marvelous Designer
With FrostBit's growing need for human models, the demand for character clothing has also grown. In creating the Virtual Elf, clothing was one of the most important elements in establishing the character's visual appeal and ensuring the elf would fit seamlessly into its future environment. To support the development of this model and future ones, we acquired a Marvelous Designer software license for our laboratory.
Marvelous Designer is a simulation program aimed at game developers and designers, promoting itself as a powerful tool for creating realistic clothing. Achieving a realistic look for models requires the accurate simulation of garments, and using existing modeling programs like Blender for clothing design can be labor-intensive, making it time-consuming to create both human models and detailed outfits.
Särmäkari (2021, 5-6), who has studied digital fashion, explains that virtual presentations and 3D design are not new in the fields of visual effects, industrial design, animation, and game design. The increased use of these tools today is mainly due to technological advances: earlier simulation methods primarily served hard materials, whereas today's computing power and algorithmic methods enable better simulation techniques for soft materials, real-time clothing animations, and hyper-realistic 3D fabric draping.
The development of simulation technology was immediately noticeable when experimenting with Marvelous Designer. The starting point for the elf's outfit was modeling detailed, pleated garments, which would have been considerably more laborious in, for example, Blender, the modeling software already at our disposal, than in Marvelous Designer.
Marvelous Designer works like traditional clothing pattern making: on top of the selected base model, you can directly begin designing and sewing the necessary garments, and the program simulates how the clothes fit that specific model. This greatly speeds up the modeling process, as it is easy to make quick adjustments and modify individual details. For example, to shorten or narrow a single sleeve, you only need to change the 2D pattern and its seams; the program then simulates the changes in just a few seconds.
Marvelous Designer proved to be an especially handy tool due to its high-quality simulation of fabric physics. While simulation is also possible in other programs, Marvelous Designer combines simulation and editing tools in a way that makes it very user-friendly, as long as the user understands the basics of clothing patterning and piece assembly. The software comes with built-in basic physics for several fabrics, allowing for quick adjustments to fabric behavior with just a few selections, such as switching between silk and wool. Although the final simulation of garments is typically best executed directly within the game engine to achieve dynamic and responsive interactions with other elements, Marvelous Designer is an excellent tool during the creation phase for simulating garment movement and making necessary adjustments. The program also allows users to import pre-made animations, which help visualize how the fabric moves on the character. Additionally, it can simulate factors like wind effects on fabric movement.
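As a concrete illustration of what in-engine garment simulation can involve, the sketch below assumes an Unreal Engine C++ project in which the Marvelous Designer garments have been exported and set up as cloth assets on the character. It spawns a directional wind source that affects the simulated fabric at runtime; the helper function name and the numeric values are purely illustrative.

```cpp
// A minimal sketch (hypothetical helper) that spawns a directional wind source in
// Unreal Engine so that clothing set up as cloth assets on the character reacts to wind.
#include "Engine/World.h"
#include "Engine/WindDirectionalSource.h"
#include "Components/WindDirectionalSourceComponent.h"

static AWindDirectionalSource* SpawnGentleWind(UWorld* World, const FRotator& Direction)
{
    if (!World)
    {
        return nullptr;
    }

    // The wind source affects all cloth-simulated meshes in the level, including the elf's garments.
    AWindDirectionalSource* Wind = World->SpawnActor<AWindDirectionalSource>(
        AWindDirectionalSource::StaticClass(), FVector::ZeroVector, Direction);

    if (Wind && Wind->GetComponent())
    {
        Wind->GetComponent()->SetStrength(0.3f); // a subtle breeze rather than a storm
        Wind->GetComponent()->SetSpeed(0.5f);
    }
    return Wind;
}
```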
While working with Marvelous Designer, it quickly became clear why the software is considered the industry standard: it is based on real garment design methods, such as pattern making, fabric cutting, draping, and seam construction. The modeler can manipulate and adjust the fabric on the character as if in the real world, adding appropriate physical properties and materials in just a few seconds. For example, the Virtual Elf’s hat, which featured intricate details, the sewing together of multiple fabrics, and draping, was easy to implement using the clothing program.
Creation of a Realistic Character – Not Just a Model, but a Multifaceted Development Process
Our specialist Jere Jalonen (2023) discussed the development of realistic characters in his thesis. He emphasized that the experience of a character's humanity depends not only on its appearance but also on its voice, facial animations, and body movements (2023, 21-23). The greatest advantage of MetaHuman lies in its ease of use and speed: anyone can produce a visually appealing model with the tool. But the development process of making the model realistic does not stop there. Next, natural body animations, facial animations, and, if necessary, a suitable voice for the character must be implemented for the illusion of a "real person" to work.
In the case of the Virtual Elf, once the model was completed, the challenges posed by these elements were addressed. The VPST project brought an OptiTrack motion capture system to our laboratory, which we used to capture the elf's animations. During the process, we noticed how even the smallest movements and gestures were important in humanizing the character. For example, the way the character's posture changed between animations or how the elf's shoulders moved while breathing had a significant impact on making the elf appear more human-like. People make many micro-movements even when standing still, so we aimed to infuse as much liveliness into the elf as possible, even while it simply sits in its workshop.
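One simple way to approximate this kind of liveliness in the engine is to layer short, randomly timed "fidget" animations on top of the base idle. The sketch below is a hypothetical Unreal Engine component illustrating the idea; the class name and the timings are illustrative, not a description of how the Virtual Elf was actually implemented.

```cpp
// A minimal sketch (hypothetical component, illustrative names) that periodically plays a
// randomly chosen "fidget" montage (a breath, a glance, a shift of posture) on the owner's mesh.
#include "Components/ActorComponent.h"
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimInstance.h"
#include "Animation/AnimMontage.h"
#include "GameFramework/Actor.h"
#include "Engine/World.h"
#include "TimerManager.h"
#include "IdleFidgetComponent.generated.h"

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UIdleFidgetComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Pool of short micro-movement montages, e.g. recorded with motion capture.
    UPROPERTY(EditAnywhere, Category = "Idle")
    TArray<UAnimMontage*> FidgetMontages;

protected:
    FTimerHandle FidgetTimer;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        ScheduleNextFidget();
    }

    void ScheduleNextFidget()
    {
        // Wait a random 5-15 seconds so the gestures never settle into a visible rhythm.
        const float Delay = FMath::FRandRange(5.f, 15.f);
        GetWorld()->GetTimerManager().SetTimer(
            FidgetTimer, this, &UIdleFidgetComponent::PlayRandomFidget, Delay, false);
    }

    void PlayRandomFidget()
    {
        if (FidgetMontages.Num() > 0)
        {
            // Play the fidget on the owner's skeletal mesh (a character body, for instance).
            if (USkeletalMeshComponent* Mesh = GetOwner()->FindComponentByClass<USkeletalMeshComponent>())
            {
                if (UAnimInstance* AnimInstance = Mesh->GetAnimInstance())
                {
                    const int32 Index = FMath::RandRange(0, FidgetMontages.Num() - 1);
                    AnimInstance->Montage_Play(FidgetMontages[Index]);
                }
            }
        }
        ScheduleNextFidget();
    }
};
```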
We created dozens of animations and idle animations for the Virtual Elf, aiming to develop a character that was as human-like and lively as possible in its environment. During this process, Jalonen’s (2023, 22-23) observations about how even the smallest detail can break the illusion proved to be true. While we succeeded in making the elf’s movements human-like, synchronizing the facial animations with the finished body animations created an unnatural discord between the face and the body, which immediately felt inhuman. Creating perfectly natural movements and gestures for a character requires a significant amount of work to ensure that the body, face, and micro-movements function seamlessly together.
With the Virtual Elf, we realized how important it would be to use the same actor throughout all stages of character creation in order to create a fully realistic human character. Ideally, the same person would serve as the model reference, motion capture actor, voice actor, and the basis for facial animations. If different people are used at different stages, it is likely that inconsistencies will arise between the various aspects of the character’s physicality, which can lead to awkward and unrealistic situations.
Oddey and White (2009, 31-32) describe people as being highly critical of photorealistic imagery. They argue that people's brains latch onto and magnify even the smallest imperfections, such as soulless eyes or stiff lips. The Virtual Elf's face showed how facial animations recorded for a MetaHuman can produce expressions that appear unnatural due to the positioning of the mouth and eyebrows. Draude (2011, 7) describes the results of Mori's research, which indicate that humanoid characters fall into the uncanny valley when they achieve a high degree of human likeness but still exhibit some flaws. Although today's MetaHuman characters could be considered so realistic that they no longer fall into the uncanny valley, unnatural movement can still evoke discomfort. For example, a character that does not fully represent a specific individual but combines traits, such as movements or facial expressions, from multiple people can seem contradictory. While our positive feelings toward simulated characters representing real individuals increase as we approach realism, this holds only up to a certain point. If we approach reality without fully achieving it, some of our reactions shift from empathy to aversion. (Jerald 2016, 49.)
Ultimately, we achieved our goal with the Virtual Elf: a character that lives as naturally as possible in its virtual environment and responds to hand signs by signing back. As with all human models, the elf could have been refined endlessly and the animations redone over and over, but there would likely always be some detail that would break the illusion of a real human. However, in terms of the VPST project's objectives, the elf has succeeded in its task if even one spectator thinks, for even a moment, that the elf's world is a window into its real workshop. The Virtual Elf will be on display in the Santa Claus Village Christmas House in the autumn of 2024 for user testing, and it can be visited during the Christmas House's opening hours.
Realism with Limitations
MetaHuman and Marvelous Designer significantly accelerate the modeling process, but unlocking the full potential of the models created with them still requires considerable additional work and time. These programs function as tools within the modeling workflow, and although they are powerful, they are not the only path to creating successful characters. When the goal is to craft characters that evoke positive emotions, "hyper-realistic" human models represent just one possible approach. For this style to be effective, the character must not only look realistic but also move, gesture and speak naturally, like a real person.
Higgins et al. (2021, 2) refer to studies showing that stylized, cartoon-like character renderings are often rated more favorably than more realistic but sickly-looking counterparts. In character design, it is essential to test character versions with the target audience to ensure that the character evokes the desired positive emotions in its intended context. Jerald (2016, 257) also notes that cartoon-like caricature characters can be effective and appealing, especially as avatars, as they can avoid the uncanny valley entirely.

Like any tool, MetaHuman, even as a spearhead tool for human models, has its weaknesses. Jalonen (2023, 34) points out in his reflection that MetaHuman's limitations outside the program are considerable: characters cannot be transferred to programs that are not supported by Epic Games, and the possibilities for modifying characters outside the program are very restricted. He also notes the challenges that MetaHuman's limited number of body types poses for creating digital twins, and that a MetaHuman's face cannot be taken into modeling programs other than Autodesk Maya without breaking. In the case of the Virtual Elf, the constraints related to the character's body types, hair, and the difficulties of modifying the MetaHuman model. Such challenges may eventually be addressed by advancements in character creation software, including competing programs that could offer more flexibility and customization options. In our laboratory, we are eager to see how tools for creating realistic human characters develop, and how much closer current technology and future innovations will bring us to replicating a realistic human.
This article was written as part of the VPST project, whose website you can view here. The VPST project's implementation period is 1.8.2023 – 31.12.2025. The total budget of the project is €1,231,143, and the budget of the investment project running alongside it is €478,573. The project is financed by the European Regional Development Fund (ERDF).
Reference Literature
Jerald, J. (2016). The VR Book: Human-Centered Design for Virtual Reality. San Rafael: Morgan & Claypool
Oddey, A. & White, C. (2009). Modes of Spectating. Chicago: Intellect
Publications
Draude, C. (2011). Intermediaries: reflections on virtual humans, gender, and the Uncanny Valley. ResearchGate. Publication. Referenced 4.10.2024 https://tinyurl.com/yzhby2nk
Higgins, D., Egan, D., Fribourg, R., Cowan, B. & McDonnell, R. (2021). Ascending from the valley: Can state-of-the-art photorealism avoid the uncanny? Association for Computing Machinery. Case study. Referenced 4.10.2024 https://dl.acm.org/doi/abs/10.1145/3474451.3476242
Särmäkari, N. (2021). Digital 3D Fashion Designers: Cases of Atatac and the Fabricant. Article. Aalto Department of Design. Referenced 4.10.2024 https://aaltodoc.aalto.fi/items/008e7b1f-961e-45c3-933b-6c7870aee334
Dissertations
Jalonen, J. (2023). Realistisen hahmon kehittäminen kulttuuriosaaja simulaatioon [Developing a realistic character for a culture expert simulation]. Thesis. Lapland University of Applied Sciences. Referenced 4.10.2024
FrostBit's article publications are written by the lab's professionals on the activities and results of its projects, as well as on other topics related to RDI activities and the ICT sector. The articles are evaluated by FrostBit's publishing committee.
Henna Huotarinen
Henna works as a specialist focusing on 2D and 3D graphics. Her expertise encompasses 3D modeling, visual identities, user interface design and website design. As a master’s graduate in graphic design, Henna focuses on combining visual quality and user-friendliness in all her projects.