ABOUT

What is the Think Face Project? Basic research into the Nigaoe engine

What does it mean for a computer to draw a Nigaoe?

The development of the Nigaoe engine began with research into the efficient transmission of facial video images for video teleconferencing and picturephone systems. This research used face photographs and three-dimensional models of the head to establish a theory for sending and reconstructing facial video using limited information. In the course of pursuing technological research into methods of extracting and analyzing facial features from photographs, together with research into facial recognition, cognition, and facial similarity, a system was produced that became the basis for the “Nigaoe engine.” Rather than converting a photograph into a Nigaoe through image-processing software, this system analyzes an input facial image to extract and quantify the features that make up a face’s distinct characteristics. Based on this numerical data, the computer reproduces the distinctive facial contour and facial parts to draw a Nigaoe.
A digital Nigaoe based on numerical data can be freely arranged to show emotions such as happiness, anger, fear, sadness, and surprise; animated to talk or sing; or rendered in a different drawing style. It opens up new possibilities that will redefine the existing image of the “Nigaoe.”

Draw a Nigaoe = Numerical representation of the face

The unique aspect of the Nigaoe engine is that it quantifies the features of a face in the process of drawing a Nigaoe. The numerical data, or scores, representing the features of a person’s face are unique; even identical twins do not have perfectly identical faces. If accuracy improves in the future, the technology could be applied to face authentication, among other uses. Until now, judging whether one face resembles another was based on subjective impressions, but by using the numerical scores of a face, similarity or difference can be compared quantitatively. Wide-ranging applications are expected in human-computer communication in the AI age.
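Quantitative comparison of two faces can be sketched as a distance or similarity computation between their score vectors. The following is a minimal illustration, assuming faces are represented as plain numeric feature vectors; the vector contents and the use of cosine similarity are hypothetical stand-ins for the engine’s actual scores and metric.

```python
import math

def similarity(face_a, face_b):
    """Cosine similarity between two facial feature-score vectors.

    A score near 1.0 means the feature values nearly coincide; lower
    scores mean greater divergence between the two faces.
    """
    dot = sum(a * b for a, b in zip(face_a, face_b))
    norm_a = math.sqrt(sum(a * a for a in face_a))
    norm_b = math.sqrt(sum(b * b for b in face_b))
    return dot / (norm_a * norm_b)

# Two hypothetical feature vectors (e.g. contour ratios, part positions).
face_a = [0.82, 1.10, 0.35, 0.97]
face_b = [0.80, 1.05, 0.40, 0.95]
score = similarity(face_a, face_b)  # close to 1.0: the faces resemble each other
```

Because the comparison is purely numerical, the same function gives the same answer for any observer, which is the quantitative judgment the text describes.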

Degree of similarity among various shapes of facial contours

Similarity
Ranking:               1     2     3     4     5     6     7     8     9     10
Combination:          7+10  1+2   3+4   4+9   3+9   5+10  2+8   1+6   1+8   5+7
Degree of similarity: 0.99  0.96  0.84  0.84  0.78  0.75  0.73  0.73  0.66  0.64

What is understood from the human face?

The human face is a repository of information

Since the dawn of time, people have consciously or unconsciously focused on the human face as a matter of survival. People look at other people’s faces to identify individuals, perceive their feelings and emotional state, or read their intentions from gaze, blinking, mouth movements, or utterances. The face is an indispensable source of information for smooth communication. In academia, the face is a subject of research in natural-science fields such as zoology, medicine, psychology, and engineering, as well as in sociology and the fine arts. A society for facial studies has now been established that cuts across these fields, and the results of its research are being applied to technological development and business in various sectors.

The application of “face” technology

Research relating to the “face” is also being put to practical use in technology.
Technological approaches to measuring, analyzing, and digitizing faces are already in wide application, and will be significant for future AI, VR, and robot development. Examples include the face-detection function of digital cameras, image processing in purikura (print-club photo-booth pictures), immigration control at airports, facial authentication in room-access control systems and smartphones, and the facial expressions and movements of CG and game characters.

Computer × Nigaoe logic

How does a computer capture facial features and draw a Nigaoe?

How does the computer grasp the shape of the facial contour?

It recognizes the shape of the facial contour as a combination of element shapes.
The computer recognizes the shape of the facial contour as a combination of geometrical element shapes. It analyzes which combination of element shapes makes up the face and the ratios of those shapes, and converts them into a mathematical expression for processing.
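The idea of expressing a contour as a ratio of element shapes can be sketched numerically. In this hypothetical example, each element shape is a profile of radial distances at fixed angles, and the mixing ratio that best reproduces a measured contour is found by grid search; the actual engine’s shape basis and fitting method are not described in the source.

```python
def mix(shape_a, shape_b, ratio):
    """Blend two element shapes: ratio * a + (1 - ratio) * b."""
    return [ratio * a + (1 - ratio) * b for a, b in zip(shape_a, shape_b)]

def fit_ratio(contour, shape_a, shape_b, steps=100):
    """Grid-search the mixing ratio of two element shapes that best
    reproduces a measured contour (minimum sum of squared errors)."""
    best_ratio, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        r = i / steps
        err = sum((c - m) ** 2
                  for c, m in zip(contour, mix(shape_a, shape_b, r)))
        if err < best_err:
            best_ratio, best_err = r, err
    return best_ratio

# Hypothetical element shapes as radial-distance profiles (8 angles).
circle = [1.0] * 8
oval   = [1.0, 1.2, 1.4, 1.2, 1.0, 1.2, 1.4, 1.2]

# A measured contour that is in fact 70% circle, 30% oval.
contour = mix(circle, oval, 0.7)
ratio = fit_ratio(contour, circle, oval)  # recovers 0.7
```

The recovered ratio is exactly the kind of “mathematical expression” of a contour that the engine can then process numerically.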

How does the computer determine the shape of the face and the positions of the eyes, nose, and mouth?

It detects these features using points called “feature points.”
The features of the human face are recognized from the shape of the facial contour (large/small, aspect ratio, puffiness in the width direction, length of the chin, triangular or inverted-triangular, left-right asymmetry, etc.), the shapes of parts such as the eyebrows, eyes, nose, and mouth, and the relative positioning of those parts, such as the distance between the nose and the mouth or between the eyebrows and the eyes. These facial features are detected using 156 points called “feature points.” The group of feature points obtained from the input photograph of the face is compared with the group of feature points for an average face. The comparison results are divided into shape data and position data, and principal component analysis is performed on each.
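The split into position data and shape data can be illustrated on a single facial part. In this hypothetical sketch, the position difference is taken as the offset between centroids, and the shape difference as the point-wise residuals around those centroids; the part, the point coordinates, and the 4-point “mouth” are invented for illustration, and the source does not specify how the engine performs the split before its principal component analysis.

```python
def centroid(points):
    """Mean (x, y) of a list of feature points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def split_shape_position(part_points, average_points):
    """Compare one facial part's feature points with the average face,
    splitting the difference into position data (centroid offset) and
    shape data (residuals of each point around its centroid)."""
    c_in = centroid(part_points)
    c_avg = centroid(average_points)
    position = (c_in[0] - c_avg[0], c_in[1] - c_avg[1])
    shape = [((x - c_in[0]) - (ax - c_avg[0]),
              (y - c_in[1]) - (ay - c_avg[1]))
             for (x, y), (ax, ay) in zip(part_points, average_points)]
    return position, shape

# Hypothetical 4-point "mouth", shifted 2 px right of the average mouth
# but otherwise identically shaped.
avg_mouth = [(10, 50), (14, 52), (18, 52), (22, 50)]
in_mouth  = [(12, 50), (16, 52), (20, 52), (24, 50)]
position, shape = split_shape_position(in_mouth, avg_mouth)
# position is (2.0, 0.0); all shape residuals are zero.
```

Separating the two kinds of data this way lets each analysis (where a part sits versus how it is shaped) run on clean, independent inputs.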

Automatic extraction result of “feature points.”

How are similarity and dissimilarity determined?

Similarity is determined by quantifying differences from an “average face.”
Standard values are needed for a computer to perform discrimination. We therefore collected the faces of 300 men and women between the ages of 10 and 60 and created an “average face” by averaging the feature points extracted from them. To draw a Nigaoe, the computer derives scores by measuring how far the input facial image diverges from the standard values of the average face. The Nigaoe is finished by painting colors, determined from the tone and tint of the input face, onto the drawn lines.
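The two steps described here, averaging feature points to build the standard and scoring divergence from it, can be sketched directly. The three toy “faces” of two points each stand in for the 300 real faces of 156 points; everything in this example is invented for illustration.

```python
def average_face(faces):
    """Average corresponding feature points across many faces to build
    the 'average face' used as the standard of comparison."""
    n = len(faces)
    num_points = len(faces[0])
    return [(sum(f[i][0] for f in faces) / n,
             sum(f[i][1] for f in faces) / n)
            for i in range(num_points)]

def divergence_scores(face, avg):
    """Per-point Euclidean divergence of an input face from the average
    face; these distances stand in for the engine's numerical scores."""
    return [((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
            for (x, y), (ax, ay) in zip(face, avg)]

# Three hypothetical 2-point faces instead of 300 real ones.
faces = [[(0, 0), (10, 0)],
         [(2, 0), (12, 0)],
         [(4, 0), (14, 0)]]
avg = average_face(faces)                           # [(2.0, 0.0), (12.0, 0.0)]
scores = divergence_scores([(5, 0), (12, 0)], avg)  # [3.0, 0.0]
```

A score of zero means that point sits exactly where the average face puts it; larger scores mark the distinctive features the Nigaoe should emphasize.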

Average face

Automatic generation of Nigaoe/illustration

Confirm automatically analyzed faces at a single glance

To enable the Nigaoe engine to compare feature scores or to make manual, interactive adjustments, we are building an interactive operating interface. Principal component analysis of an input facial image is carried out automatically, and the system is designed so that the analyzed results can be confirmed at a single glance.

Evolving functions of the Nigaoe engine

Improving the quality of Nigaoe

Various studies into the Nigaoe engine are being pursued, including improving its accuracy and artistry, as well as wide-ranging applications that utilize facial data.

Drawing a Nigaoe based on verbal description

The interactive operating system can be used to draw an envisaged face by starting from the “average face” and adding features such as “slanted eyes” or “large, bright eyes,” or by emphasizing subjective verbal impressions such as a “kind impression” or “stern impression.” This makes it possible to draw a Nigaoe of someone even when no original photograph exists.
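Because the average face corresponds to all-zero divergence scores, a verbal description can be read as a set of offsets applied to named feature scores. The vocabulary, feature names, and offset magnitudes below are entirely hypothetical; the source does not describe how the engine maps words to scores.

```python
# Hypothetical mapping from verbal impressions to offsets on named
# feature scores (the real engine's vocabulary and scales are unknown).
IMPRESSIONS = {
    "slanted eyes":     {"eye_tilt": +0.8},
    "large eyes":       {"eye_size": +0.6},
    "kind impression":  {"eye_tilt": -0.3, "mouth_curve": +0.5},
    "stern impression": {"eye_tilt": +0.5, "mouth_curve": -0.4},
}

def describe_face(descriptions):
    """Start from the average face (all scores zero) and accumulate the
    offsets named by each verbal description."""
    face = {"eye_tilt": 0.0, "eye_size": 0.0, "mouth_curve": 0.0}
    for word in descriptions:
        for feature, offset in IMPRESSIONS[word].items():
            face[feature] += offset
    return face

face = describe_face(["slanted eyes", "kind impression"])
# eye_tilt accumulates 0.8 - 0.3 = 0.5; mouth_curve becomes 0.5.
```

Each description nudges the face away from the average, so combining several descriptions composes naturally into one set of scores the engine can draw from.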

Changing the drawing style

A Nigaoe naturally reflects each artist’s style.
We are currently developing technology to draw a Nigaoe in a specific style by adding exaggerations to a base Nigaoe.
This will allow a Nigaoe to be deformed or arranged in the style of shojo manga (girls’ comics), Indian-ink painting, or ukiyo-e, or so that it reflects a specific artist’s style.
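One simple way to read “adding exaggerations” numerically is to scale a face’s divergence-from-average scores by a style-dependent gain: distinctive features grow more pronounced, while the average face is left unchanged. The gain values and the single-scalar model here are illustrative assumptions, not the engine’s actual style mechanism.

```python
def exaggerate(scores, gain):
    """Scale a face's divergence-from-average scores by a style gain.
    gain > 1 exaggerates distinctive features (caricature-like styles);
    gain < 1 pulls the face back toward the average face."""
    return [s * gain for s in scores]

# Hypothetical divergence scores for three features of a base Nigaoe.
base = [0.2, -0.5, 0.1]

manga_style = exaggerate(base, 1.8)  # each deviation amplified 1.8x
subdued     = exaggerate(base, 0.5)  # each deviation halved
```

Because a score of zero maps to zero under any gain, a perfectly average face stays average in every style, and only what makes a face distinctive is pushed further in the stylized drawing.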

Improving the accuracy of the average face

Our existing average face is a model averaged from 300 Japanese people. By increasing the sample’s size or diversity, or by changing its attributes as needed, we can derive the appropriate average face for a given purpose and draw a more accurate Nigaoe.

Various types of retrieval of similar faces

Retrieval of faces based on an objective “similar or dissimilar” judgment can be performed, which would be difficult with actual photographs alone. This can also be applied in the entertainment field: checking how much you resemble a celebrity, casting actors who match the image of a role, or checking how well your face fits a stereotype such as a “horse-like” face.
Based on differences in how the upper portion of the face develops relative to the lower portion, we can also estimate how a child’s face will develop into an adult’s, enabling parent-child pairs to be identified among multiple faces.
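Similar-face retrieval reduces to ranking stored score vectors by distance from a query. The “celebrity database” and 3-dimensional scores below are invented for illustration; only the ranking-by-distance idea comes from the text.

```python
def euclidean(a, b):
    """Euclidean distance between two feature-score vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def most_similar(query, database, top=3):
    """Rank stored faces by the distance between their feature-score
    vectors and the query's; smallest distance = most similar."""
    ranked = sorted(database.items(), key=lambda kv: euclidean(query, kv[1]))
    return [name for name, _ in ranked[:top]]

# Hypothetical celebrity database of 3-dimensional feature scores.
db = {
    "celebrity_a": [0.9, 0.1, 0.4],
    "celebrity_b": [0.1, 0.8, 0.2],
    "celebrity_c": [0.5, 0.5, 0.5],
}

matches = most_similar([0.8, 0.2, 0.4], db, top=2)
# → ['celebrity_a', 'celebrity_c']
```

The same ranking machinery serves every application named above; only the database contents change, whether it holds celebrities, casting candidates, or stereotype templates.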