
An Efficient Real-time Product Recommendation

using Facial Sentiment Analysis


R. Suguna
Dept. of Computer Science and Engineering
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
Avadi, Chennai, India
[email protected]

M. Shyamala Devi
Dept. of Computer Science and Engineering
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
Avadi, Chennai, India
[email protected]

Akash Kushwaha
Dept. of Computer Science and Engineering
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
Avadi, Chennai, India
[email protected]

Puja Gupta
Dept. of Computer Science and Engineering
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
Avadi, Chennai, India
[email protected]

Abstract—Image recognition based on deep learning has made successful attempts in solving Computer Vision problems. Deep learning algorithms have primarily contributed to face detection, facial expression recognition, and age and gender determination tasks. The performances of the algorithms have been evaluated by researchers and they are made available as cloud services. By accessing the cloud, application developers can incorporate the already tested algorithms in their work and pay for the utilized service. Amazon Rekognition is a web service which provides highly accurate facial analysis and is well suited for facial recognition. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image file stored in Amazon S3. This paper explains the necessary steps to utilize the service to build an application for a retail store. The application captures customer faces, performs facial analysis and recommends appropriate products by displaying targeted advertisements. From the facial analysis responses of the recognition algorithm, a decision block is devised to select the appropriate content to be displayed. The application is tested with anonymous images and its performance is appreciable.

Keywords—facial sentiment analysis, face detection, age & gender estimation, facial expression analysis, Amazon Rekognition

978-1-5386-8158-9/19/$31.00 ©2019 IEEE

I. INTRODUCTION

Image recognition is a significant phase in image processing that tends to identify objects, people, places and actions in images. Using machine vision technologies in combination with a camera and appropriate software, computers are trained to achieve image recognition.

Image recognition performs a variety of visual tasks, such as annotating the content of images with meta-tags, searching for image content in a library, controlling autonomous robots, and automating car driving and accident avoidance systems. Recent applications of image recognition include smart photo library building, consumer targeted advertisement, and the design of assistive devices for visually impaired and physically challenged people.

The process of image recognition involves gathering and organizing data and building a predictive model to recognize images. A computer can perceive an image as a raster or vector image. Raster images contain a sequence of pixels with discrete numerical values for colors. Vector images can be considered as a set of color-annotated polygons. Organizing data refers to arranging the data for classification and feature extraction. The primary step in image classification is to extract important information by omitting the unwanted data. Feature descriptor techniques such as Scale-Invariant Feature Transform (SIFT), Haar-like features, Histogram of Oriented Gradients (HOG), Speeded Up Robust Features (SURF) and Binary Robust Independent Elementary Features (BRIEF) are used to mine significant information from an image. The principle of machine learning algorithms is to treat these feature vectors as points in a higher dimensional space. A classifier determines the planes or surfaces (contours) that separate the higher dimensional space in such a way that all examples belonging to a particular class lie on one side of the plane or surface. To build such a predictive model, deep learning uses neural networks. A neural network, similar to the biological network of the human brain, estimates outputs for a huge amount of unknown inputs using activation functions. A suitable learning algorithm should be selected to recognize the images. Some of the algorithms suggested for image classification are support vector machines (SVM), neural networks, decision trees, K-nearest neighbors (KNN) and logistic regression. The image recognition model is trained with large volumes of training images and tested with unknown samples to assess the performance of the model.

Automated facial recognition systems have the capability to identify individuals from their facial features such as the position of the nose, eyes and mouth. These characteristics are further analyzed and compared to a stored identity to authenticate a person.

Facial expression analysis involves the measurement and recognition of expression. Automation of facial expression prediction consists of face detection, facial feature extraction and facial expression recognition.

Human beings can recognize objects by seeing them with little effort. Given an image, making machines recognize and understand the objects in it is a challenging task. To solve this problem, over the years computer scientists have researched Artificial Intelligence, more specifically the branch of machine learning called deep learning. Today deep learning is used in applications like speech recognition, image recognition, natural language processing and recommendation systems. Deep learning systems undergo a huge amount of training over an extended period of time to learn and perform a specific task.
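As a toy illustration of the earlier point that classifiers treat feature vectors as points in a higher dimensional space, the sketch below implements a minimal k-nearest-neighbors classifier in Node.js (all feature values and labels here are invented for illustration):

```javascript
// Minimal k-nearest-neighbors classifier: every training example is a point
// in feature space, and a sample takes the majority label of its k closest points.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
}

function knnClassify(train, sample, k) {
  const votes = train
    .map(({ features, label }) => ({ label, d: euclidean(features, sample) }))
    .sort((p, q) => p.d - q.d)          // nearest first
    .slice(0, k)                        // keep the k nearest neighbors
    .reduce((acc, { label }) => {       // tally their labels
      acc[label] = (acc[label] || 0) + 1;
      return acc;
    }, {});
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

// Invented 2-D feature vectors for two classes.
const train = [
  { features: [1, 1], label: 'A' },
  { features: [1, 2], label: 'A' },
  { features: [8, 8], label: 'B' },
  { features: [9, 8], label: 'B' },
];

console.log(knnClassify(train, [2, 1], 3)); // 'A' — closest to the A cluster
```

Real feature vectors (e.g., HOG or LBP descriptors) have hundreds of dimensions, but the geometric idea is the same.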
An image recognition system works with extensive labeled training images and correlates the labels with the images. It iterates over the training process till the confidence level of the expected output is satisfactory. Deep learning uses neural networks to learn and to predict, with a confidence score, what object it thinks is in the image.

The organization of the paper is as follows: Section II discusses the various techniques used to build facial recognition. Section III presents the applications of facial recognition in various sectors. Section IV introduces the features of the Amazon Rekognition system. Section V presents an application for a retail store.

II. RELATED WORK

The age of a person can be inferred by observing distinct patterns emerging from the facial appearance. Age synthesis is the process of re-rendering a face image with natural aging and introducing rejuvenating effects on the individual face. Age estimation is done by automatically labeling a face image with the exact age in years or specifying the age group [1]. This particular age estimation system can be interfaced with a camera to warn or deny children access to adult Web sites or restricted movies [2]. A heuristic approach is used to simulate a face with creases and aging wrinkles to create texture details on the given face [3], [4]. The technique is simple to implement but takes time to generalize the rendering process. Cloning facial expressions shows improvement in photorealistic effect. It tries to capture and clone face attributes in a target image, which can be used for recognizing facial expressions [5] or aging skin details [6]. Another approach called Merging Ratio Images (MRI) [7] extends the Ratio Image (RI) concept to multiple face attribute representation. Merged expression RI [8], illumination RI (quotient image) [9] and aging RI [10] are also used for photorealistic face rendering.

Based on face features from neighboring age groups, a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method has been suggested and formulated to learn the aging dictionaries [11]. An elaborate overview of state-of-the-art advances in biometric demographic feature analysis has been provided [12]. An investigation on the impact of facial expressions on automatic age estimation has been carried out and a new graphical model has been proposed interlinking the age/expression labels and the extracted features [13]. Recently, a faster R-CNN has been proposed [14] and has shown impressive results on various face detection tasks compared to existing algorithms. The technique focuses on image representation extraction from two consecutive convolution layers. Local feature extraction is done through one layer, followed by pooling of the extracted features in the second layer [15]. For robust facial expression recognition, a biologically inspired sparse-deep simultaneous recurrent network (S-DSRN) has been illustrated [16]. A detailed discussion on various face recognition algorithms and a recommendation of application-specific facial recognition techniques has been provided [17].

A joint estimation of facial components such as gender, expression and ethnicity has been demonstrated [18]. Self-Organizing Maps and Gabor patterns are used to build efficient face recognition systems [19]. Local Binary Patterns are used for facial recognition applications; also, fusing the Discrete Cosine Transform with Local Binary Patterns for feature extraction has provided promising results in face recognition [20].

Using a Convolutional Neural Network for facial expression recognition has been demonstrated [21]. Human emotions are learned using a deep Convolutional Neural Network (CNN) and the intensity changes on a face during an emotion are illustrated [22]. Two facial images, facial gray scale images and their corresponding local binary pattern (LBP) facial images, are taken and processed by a deep neural network. The outputs are fused in a weighted manner. Softmax classification has been used for assessing the result of the final recognition [23]. An automatic facial expression recognition system based on a deep network framework has been presented [24]. The technique uses autoencoders and an SOM-based classifier. Results prove that a better representation of facial expression can be achieved using autoencoders.

A multimodal 2D+3D facial expression recognition (FER) system with a deep fusion convolutional neural network has been demonstrated with a feature extraction subnet, a feature fusion subnet and a softmax layer [25]. Facial feature extraction methods adopted for gender recognition in the past decades have been surveyed [26]. Automatic face analysis using 3-D shape representation has been developed [27] and its invariance to facial expression has been demonstrated. The technique uses a combination of statistical shape modeling and non-rigid deformation matching.

III. RECENT APPLICATIONS OF FACIAL RECOGNITION

The majority of facial recognition use-cases appear to fall into three major categories:

• Security: Deep learning algorithms are used by companies for fraud detection, to avoid traditional passwords for authentication, and to improve the ability to distinguish between a human face and a photograph.

• Healthcare: To track patient medication consumption and support pain management procedures, machine learning algorithms are merged with computer vision to perform the tasks effectively.

• Marketing: With ethical considerations, marketing is expanding the domain of facial recognition innovation to study customer behavior.

IV. AMAZON REKOGNITION WEB SERVICE

Amazon has released a web service for image recognition based on deep learning algorithms. It is completely integrated with other AWS services, highly secure and accurate. It also provides a fully managed service supporting the inclusion of new images for applications. The service provides a simple and easy to use API that can quickly analyze any image or video file stored in cloud storage.
Given an image, the Rekognition API can identify objects, people, text, scenes, and activities. Amazon Rekognition helps in accurate facial analysis and facial recognition. The provided API can detect, analyze, and compare faces, which helps in user verification, people counting and user cataloging.

Amazon Rekognition is based on the same proven, highly scalable deep learning technology developed by Amazon's computer vision scientists to analyze billions of images.

Amazon Rekognition can detect faces in images and stored videos. With this API one can get information about where faces are detected in an image or video, and facial landmarks such as the position of the eyes, nose and mouth. It has the ability to detect gender, age and emotions such as happy, sad, or surprised from facial images.

When an image that contains a face is provided as input to the DetectFaces API, Amazon Rekognition detects the face in the image, analyzes the facial attributes of the face, and then returns a percentage confidence score for the face and the facial attributes detected in the image.

Rekognition can locate faces within an image and, using facial characteristics, it can recognize emotions, demographic details, facial landmarks and image quality. These features help in building applications that perform user sentiment analysis.

Given an image with one or more faces, the DetectFaces API returns the facial details of the image. It detects the number of faces in the image and returns the detailed parameters of each face present in the image. Table I lists the parameters of facial details returned by the image recognition API.

TABLE I. FACIAL DETAILS CAPTURED BY THE API

Facial Details      Parameters
Facial Landmarks    X,Y position of Eyes, Nose, Mouth, Eye Brows
Demographic         Age Range, Gender
Emotions            Happy, Sad, Angry, Calm, Surprise, Confused, Disgusted etc., with confidence scores
Other Attributes    Face BoundingBox; has beard, mustache, sunglasses; isSmiling, isMouthOpen, isEyesOpen; pose information
Image Quality       Brightness, Sharpness

V. BUILDING AN APPLICATION USING WEB SERVICE

A. AWS Process

Fig. 1 explains the sequence of steps required to access the API provided by Amazon. To utilize the service one should create an account in AWS.

Fig. 1 Steps to be performed in AWS

B. Application Development

The application uses the web service of Amazon to personalize content to the user based on demographic and sentiment analysis of the user's facial image. This application is suitable for any retail store as part of a marketing strategy. Fig. 2 shows the phases in the application development.

Fig. 2 Phases in Facial Sentiment Analysis

A live customer image is captured through a camera, the image is analyzed to detect faces in it, and facial details are extracted using the Image Recognition API. The responses of the API are analyzed by decision logic to dynamically display the targeted ads on the screen, which are then viewed by the customer.

The application process flow is shown in Fig. 3.

Fig. 3 Application Process Flow

Fig. 4 explains the complete flow of the process.
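The end-to-end flow — a captured image lands in S3, a Lambda-style handler asks Rekognition for face details, stores the response, and selects an ad — can be sketched in Node.js roughly as follows. All names here (handleCapturedImage, the injected services object, the ad keys) are illustrative, not the paper's actual Lambda code; the AWS clients are injected as stubs so the sketch runs without an AWS account:

```javascript
// Illustrative sketch of the application flow: analyze an uploaded image,
// save the response for analytics, and choose a target ad.
// The Rekognition/DynamoDB/S3 dependencies are injected so the logic
// can be exercised with stubs; the ad choice is deliberately simplified.
async function handleCapturedImage(imageKey, services) {
  const { rekognition, table, ads } = services;
  const { FaceDetails } = await rekognition.detectFaces(imageKey);
  await table.save(imageKey, FaceDetails); // kept for future reporting
  const face = FaceDetails[0];
  // No confident face detection: fall back to the shop's default ad.
  if (!face || face.Confidence < 80) return ads.default;
  return face.Gender.Value === 'Female' ? ads.cosmetics : ads.trimmer;
}

// Stub services standing in for Rekognition, DynamoDB and the ad bucket.
const saved = [];
const services = {
  rekognition: {
    detectFaces: async () => ({
      FaceDetails: [{ Confidence: 99.9, Gender: { Value: 'Female' } }],
    }),
  },
  table: { save: async (key, details) => saved.push([key, details]) },
  ads: {
    default: 'default.jpg',
    cosmetics: 'cosmetics.jpg',
    trimmer: 'trimmer.jpg',
  },
};

handleCapturedImage('customer-001.jpg', services).then((ad) => {
  console.log(ad); // 'cosmetics.jpg'
});
```

In the real deployment, `rekognition.detectFaces` would wrap the aws-sdk Rekognition client and `table.save` a DynamoDB put, as described in the steps below.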


The customer image is captured and stored in the S3 bucket. Using the lambda function, the captured image is given as input to the DetectFaces API. The API responses with the analyzed facial details are stored in a DynamoDB table for future reference. From the response of the API, suitable target ads are displayed to the customer.

Fig. 4 Process Flow Description

The entire set of steps in building the application is shown in Fig. 5.

Fig. 5 Overall Architecture of the application

1. Create an S3 bucket:
Amazon Web Services (AWS) provides an object storage service organized into buckets. A bucket is a logical unit of object storage in the Simple Storage Service (S3). Buckets are used to store objects containing data, with metadata describing the data.
• Using the credentials of Amazon Web Services, access the S3 storage and create two buckets, one for storing the targeted ads and the other for storing the captured customer image.
• Also set the CORS configuration of the S3 bucket. Cross-origin resource sharing (CORS) allows interacting with resources in a different domain and describes a way for client web applications to initiate such interaction. Amazon S3 with CORS support helps us to build rich client-side web applications and facilitates access to Amazon S3 resources.

2. Create a DynamoDB table:
This table is used to store the responses of the DetectFaces API and provides analytic reports on customer demographics and sentiments. Amazon offers a fully managed proprietary NoSQL database service, DynamoDB, as part of Amazon Web Services. Create a DynamoDB table by selecting it from the Database section of the Amazon services. Name the table and set its primary key.

3. Create a lambda function:
Code a lambda function in Node.js that will process the image using Amazon Rekognition and return the appropriate ad content based on the facial details analyzed.
• Create the package.json
• Install aws-sdk
• Install http
• Install uuid
• Write the lambda code in index.js

4. Set up the lambda function in AWS:
• Set up IAM roles for the lambda function
• Create an API in Amazon API Gateway to expose the lambda function to clients
• Initiate a call to the DetectFaces API.

The API response for a sample image is listed in Table II.

TABLE II. API RESPONSE DETAILS

detectFaces API call response:

{
  "FaceDetails": [
    {
      "AgeRange": { "High": 38, "Low": 23 },
      "Beard": { "Confidence": 97.11119842529297, "Value": false },
      "BoundingBox": {
        "Height": 0.42500001192092896,
        "Left": 0.1433333307504654,
        "Top": 0.11666666716337204,
        "Width": 0.2822222113609314
      },
      "Confidence": 99.8899917602539,
      "Emotions": [
        { "Confidence": 93.29251861572266, "Type": "HAPPY" },
        { "Confidence": 28.57428741455078, "Type": "CALM" },
        { "Confidence": 1.4989674091339111, "Type": "ANGRY" }
      ],
      "Eyeglasses": { "Confidence": 99.99998474121094, "Value": true },
      "EyesOpen": { "Confidence": 96.2729721069336, "Value": true },
      "Gender": { "Confidence": 100, "Value": "Female" },
      "Landmarks": [
        { "Type": "eyeLeft", "X": 0.23941855132579803, "Y": 0.2918034493923187 },
        { "Type": "eyeRight", "X": 0.3292391300201416, "Y": 0.27594369649887085 },
        { "Type": "mouthLeft", "X": 0.24296623468399048, "Y": 0.4368993043899536 },
        { "Type": "mouthRight", "X": 0.32943305373191833, "Y": 0.42591965198516846 },
        { "Type": "leftEyeUp", "X": 0.23798644542694092, "Y": 0.28594088554382324 },
        { "Type": "leftEyeDown", "X": 0.2404623031616211, "Y": 0.29718098044395447 },
        { "Type": "rightEyeUp", "X": 0.3284870386123657, "Y": 0.27036795020103455 },
        { "Type": "rightEyeDown", "X": 0.32978174090385437, "Y": 0.2812310755252838 },
        { "Type": "mouthUp", "X": 0.2924160361289978, "Y": 0.407451868057251 },
        { "Type": "mouthDown", "X": 0.29673251509666443, "Y": 0.46654582023620605 }
      ],
      "MouthOpen": { "Confidence": 72.5211181640625, "Value": true },
      "Mustache": { "Confidence": 77.63107299804688, "Value": false },
      "Pose": {
        "Pitch": 8.250975608825684,
        "Roll": -8.29802131652832,
        "Yaw": 14.244261741638184
      },
      "Quality": { "Brightness": 46.14684295654297, "Sharpness": 99.9945297241211 },
      "Smile": { "Confidence": 99.47274780273438, "Value": true },
      "Sunglasses": { "Confidence": 97.63555145263672, "Value": true }
    }
  ]
}

The confidence score confirms the presence or absence of a facial feature in the image. A high confidence score confirms the presence of the particular facial feature.

5. Save the responses of the API call in the DynamoDB table. This storage is activated in the lambda code.

6. The decision logic written in the lambda code gets activated by the API responses. Based on the decision logic recommendation, the target image is retrieved from the S3 bucket.

7. The target image is pushed for display on the screen.

Creation of a webpage to run the application:
• Code an HTML client that uploads the image to the bucket and calls the API, fetching the ads from the bucket.
• Create an identity for unauthenticated access to AWS resources from the browser script.

VI. EXPERIMENTAL RESULTS

To test the application, a few target images are uploaded to the target-ad S3 bucket. The application is run by executing the created webpage. For experimental purposes, customer images are captured through a webcam. The demographic details of the customer are retrieved by calling the API. The results of the demographic details are processed to make the target ad decision. First the confidence score is checked to confirm the presence of a face image. If a human face is detected, then Age, Emotion, Gender, and the presence of a Moustache or Sunglasses are used to decide the target image to be presented to the customer. Some of the decision logic applied in the proposed work is listed in Table III. The decision logic can be customized to the needs of the shop.

TABLE III. DECISION LOGIC OF TARGET IMAGE

Demographic Parameter   Value            Inference       Target Image
face confidence         Less than 80%    No human face   Default Image of the Shop
face confidence         More than 80%    Kid             Ice cream
  Age.Range             5-15
face confidence         More than 80%    Lady            Cosmetic
  Age.Range             15-40
  Gender                Female
Age.Range               15-35            Men             Trimmer
  Gender                Male
  Beard Confidence      More than 80%
Age.Range               15-30            Men/Women       Yoga DVD
  Gender                Male/Female
  Emotions              Confused

The proposed approach was experimented with 50 customers and was appreciated. Around 42 customers agreed with the response images and hence the accuracy rate is 84%. Sample customer images and the results of response images are shown in Fig. 6.
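The rules of Table III can be written as a plain decision function. The sketch below takes its thresholds, age ranges and inferences from the table; the function shape, field names and ad keys are illustrative, not the paper's actual lambda code:

```javascript
// Decision logic of Table III: map detected face attributes to a target ad key.
// Field names (faceConfidence, ageLow, ageHigh, ...) are illustrative.
function chooseTargetAd(face) {
  // No confident face detection: show the default image of the shop.
  if (face.faceConfidence < 80) return 'default-shop-image';
  const within = (lo, hi) => face.ageLow >= lo && face.ageHigh <= hi;
  if (within(5, 15)) return 'ice-cream';                     // kid
  if (face.emotion === 'CONFUSED' && within(15, 30)) return 'yoga-dvd';
  if (face.gender === 'Male' && within(15, 35) && face.beardConfidence > 80)
    return 'trimmer';                                        // bearded man
  if (face.gender === 'Female' && within(15, 40)) return 'cosmetics'; // lady
  return 'default-shop-image';
}

// The sample face of Table II: female, age range 23-38, HAPPY, no beard.
console.log(chooseTargetAd({
  faceConfidence: 99.88, ageLow: 23, ageHigh: 38,
  gender: 'Female', emotion: 'HAPPY', beardConfidence: 2.89,
})); // 'cosmetics'
```

Because the rules in Table III overlap (a confused 20-year-old woman matches two rows), the order of the checks encodes a priority; any real deployment would have to fix that priority explicitly.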
Fig. 6 Results of the application

VII. CONCLUSION

Face recognition techniques have traditionally been associated with the security sector, but today their need has expanded to retail, marketing and health. Automated face recognition systems have gained focus in biometric research, and various algorithms and methods have been suggested for effective face recognition tasks. This paper analyzes the effectiveness of an existing deep learning based face recognition algorithm which is provided as an abstraction. The algorithm is offered as a cloud service and the user can apply it in their application by calling the appropriate API in their code. The Amazon Rekognition service provides an API for recognizing the facial details of a given image. The entire process involved in accessing this service is also explained in this paper. As an application, this service is utilized to display suitable ads to the customers in a retail store. The performance of the system is examined by testing with images captured in real time and the responses are precise. The decision logic can be improved by surveying more customers in a departmental store.

ACKNOWLEDGMENT

I would like to thank and appreciate Amazon's computer vision scientists for developing a scalable facial recognition system which provides accurate facial analysis. I express my sincere gratitude to Mr. Dhiman Halder, Solution Architect, for guiding me in building the application.

REFERENCES

[1] Y. Fu, G. Guo, and T. S. Huang, "Age Synthesis and Estimation via Faces: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32(11), pp. 1955-1976, 2010.
[2] G. Guo, Y. Fu, C. Dyer, and T. S. Huang, "Image-Based Human Age Estimation by Manifold Learning and Locally Adjusted Robust Regression," IEEE Trans. Image Processing, vol. 17, pp. 1178-1188, 2008.
[3] L. Boissieux, G. Kiss, N. M. Thalmann and P. Kalra, "Simulation of Skin Aging and Wrinkles with Cosmetics Insight," Proc. Eurographics Workshop Animation and Simulation, pp. 15-27, 2000.
[4] T. Kuratate and T. Nishita, "A Simple Method for Modeling Wrinkles on Human Skin," Proc. Pacific Conf. Computer Graphics and Applications, pp. 166-175, 2002.
[5] H. Pyun, Y. Kim, W. H. Chae and S. Shin, "An Example-Based Approach for Cloning Facial Expressions," Proc. 2003 ACM SIGGRAPH/Eurographics, pp. 167-176, 2003.
[6] Y. Shan, Z. Liu, and Z. Zhang, "Image-Based Surface Detail Transfer," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 794-799, 2001.
[7] Y. Fu, "Merging Ratio Images Based Realistic Object Class ReRendering," Proc. IEEE Conf. Image Processing, pp. 3523-3526, 2004.
[8] Z. Liu, Y. Shan, and Z. Zhang, "Expressive Expression Mapping with Ratio Images," Proc. ACM SIGGRAPH, pp. 271-276, 2001.
[9] A. Shashua and T. Riklin-Raviv, "The Quotient Image: Class-Based Re-Rendering and Recognition with Varying Illuminations," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23(2), pp. 129-139, 2001.
[10] Y. Fu, "Photorealistic Face Rendering Based on a Fusion Model of Linear and Lambertian Object Class," Master's thesis, AI&R, Xi'an Jiaotong Univ., 2004.
[11] X. Shu, J. Tang, Z. Li, H. Lai, L. Zhang and S. Yan, "Personalized Age Progression with Bi-Level Aging Dictionary Learning," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 40(4), pp. 905-917, 2018.
[12] Y. Sun, M. Zhang, Z. Sun and T. Tan, "Demographic Analysis from Biometric Data: Achievements, Challenges, and New Frontiers," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 40(2), pp. 332-351, 2018.
[13] Z. Lou, F. Alnajar, J. M. Alvarez, N. Hu and T. Gevers, "Expression-Invariant Age Estimation Using Structured Learning," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 40(2), pp. 365-375, 2018.
[14] H. Jiang and E. Learned-Miller, "Face Detection with the Faster R-CNN," 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, pp. 650-657, 2017.
[15] L. Liu, C. Shen and A. V. Hengel, "Cross-Convolutional-Layer Pooling for Image Recognition," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 39(11), pp. 2305-2313, 2017.
[16] M. Alam, L. S. Vidyaratne and K. M. Iftekharuddin, "Sparse Simultaneous Recurrent Deep Learning for Robust Facial Expression Recognition," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, pp. 4905-4916, 2018.
[17] D. Sadhya, A. Gautam and S. K. Singh, "Performance comparison of some face recognition algorithms on multi-covariate facial databases," Fourth International Conference on Image Information Processing (ICIIP), Shimla, India, pp. 1-5, 2017.
[18] S. Venkatraman, S. Balasubramanian and D. Gera, "Multiple face-component analysis: A unified approach towards facial recognition tasks," 2nd International Conference on Man and Machine Interfacing (MAMI), Bhubaneswar, India, pp. 1-6, 2017.
[19] K. V. Arya, G. Upadhyay, S. Upadhyay, S. Tiwari and P. Sharma, "Facial recognition using histogram of Gabor phase patterns and self organizing maps," 11th International Conference on Industrial and Information Systems (ICIIS), Roorkee, pp. 883-889, 2016.
[20] E. R. Buhuş, L. Grama and C. Şerbu, "A facial recognition application based on incremental supervised learning," 13th IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, pp. 279-286, 2017.
[21] A. Fathallah, L. Abdi and A. Douik, "Facial Expression Recognition via Deep Learning," IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, pp. 745-750, 2017.
[22] G. A. Kumar, R. K. Kumar and G. Sanyal, "Facial emotion analysis using deep convolution neural network," International Conference on Signal Processing and Communication (ICSPC), pp. 369-374, 2017.
[23] B. Yang, J. Cao, R. Ni and Y. Zhang, "Facial Expression Recognition Using Weighted Mixture Deep Neural Network Based on Double-Channel Facial Images," IEEE Access, vol. 6, pp. 4630-4640, 2018.
[24] A. Majumder, L. Behera and V. K. Subramanian, "Automatic Facial Expression Recognition System Using Deep Network-Based Data Fusion," IEEE Transactions on Cybernetics, vol. 48, pp. 103-114, 2018.
[25] H. Li, J. Sun, Z. Xu, and L. Chen, "Multimodal 2D+3D Facial Expression Recognition With Deep Fusion Convolutional Neural Network," IEEE Transactions on Multimedia, vol. 19(12), pp. 2816-2831, 2017.
[26] N. Choon-Boon, T. Yong-Haur and G. Bok-Min, "A review of facial gender recognition," Pattern Analysis and Applications, vol. 18, pp. 739-755, 2015.
[27] W. Quan, B. J. Matuszewski, and L. K. Shark, "Statistical shape modelling for expression-invariant face analysis and recognition," Pattern Analysis and Applications, vol. 19(3), pp. 765-781, 2016.