Music Recommendation System Based On Facial Expression
Dr. S.L. Jany Shabu 1, Associate Professor, Department of Computer Science and Engineering, Sathyabama Institute of Science & Technology, Chennai, India ([email protected])
Chintala Janaardhan 2, Kodhanda Bhaskar 3, Students, Department of CSE, Sathyabama Institute of Science & Technology, Chennai, India ([email protected], [email protected])
Dr. A. Viji Amutha Mary 4, Professor, Department of CSE, Sathyabama Institute of Science & Technology, Chennai, India ([email protected])
Abstract: Music streaming services now make it simple to listen to a wide variety of music. Consumers are increasingly relying on recommendation systems to help them choose appropriate music at all times. However, there is room for improvement in terms of customization and emotion-based suggestions. Furthermore, music tastes change depending on the user's current mood. If these issues are not solved, these online services will fail to meet user expectations. This research study shows how to create a personalized music recommendation system based on listener thoughts, emotions, and facial expressions. A recommendation system is created using a combination of artificial intelligence technology and generalized music therapy approaches to help people choose music for different life situations while maintaining their mental and physical health.

Keywords: K-Nearest Neighbor (KNN), Convolutional Neural Network (CNN), Deep Learning, Artificial Intelligence, Machine Learning.

I. INTRODUCTION

It may be difficult to find the ideal song for a certain user among the millions of songs available on internet streaming platforms. The majority of music recommendation systems are based on user ratings and song acoustic properties. These solutions are unable to address the issues of cold starts and rating diversity. Furthermore, music preferences shift based on the user's current mood. These online services will fall short of user expectations if these difficulties are not addressed. The major virtue of the music recommendation approach provided here is that it is extremely accurate and precise.

II. LITERATURE REVIEW

In the study proposed by Ahlam Alrihaili et al. (2019), the system detects emotions [1]. If the individual is experiencing a bad emotion, a selection of songs that will help him feel better is provided. If the identified feeling is positive, on the other hand, a suitable playlist is provided, comprising various sorts of music that will boost the good emotions.

Ankita Mahadik et al. (2021): the model can reliably identify seven moods (anger, contempt, terror, joy, sorrow, neutrality, and surprise) with an accuracy of about 75% [2], and the Android application can play music appropriate for the mood.

Ashish Patel et al. (2018): a CNN model was employed to detect emotions through facial expressions to improve the app [3]. The user can enter words or a facial expression into the application. It suggests music and playlists to the user based on the emotion sensed, just like our model.
Dan Wu et al. (2019) use a facial-expression-based neural network method to propose songs based on the user's mood [4]. This approach is more effective than earlier approaches since it does not require users to first look for and create a particular playlist. The ability to determine a person's mood relies heavily on facial expressions. A webcam or camera is used to capture the face, from which the input is retrieved. This information is then used to determine the person's mood.

Deger Ayata et al. (2018) employed decision trees, random forests, support vector machines, and k-nearest neighbours on GSR and PPG signal data from 32 subjects, with and without feature fusion [5]. Extensive trials using real data have verified the precision of the proposed emotion categorization system, which can be implemented into any recommendation engine.

Vijay Prakash Sharma et al. (2021) review the following works in [6]: a music recommendation system based on continuous contextual information combination, Smart-DJ, novelty research on music recommendation using graphs, and context-aware personalization for smartphone music recommendations. These tools are made to help customers find new, personalised music. For the analysis, they make use of the data set provided by Douban Music.

Goonjan Jain et al. (2021): the results show that each group listens to a range of musical genres [7]. Most communities have more people who listen to "universal genres" and fewer people who listen to niche genres (like reggae). Experts are the most centrally located nodes within a community. A community expert's judgements are used to inform recommendations to other community members.

Huihui Han et al. (2018) evaluate the musical content's features using the mel-frequency cepstral coefficient (MFCC) feature quantity [8]. After that, the feature quantities are clustered to compress the music feature values.

Shun-Hao et al. (2018): the system uses the collaborative filtering method to compute suggestion results and find the user's possible interests [10]. Then, based on the genes of the collected music, each piece of music is given a weight. Following weight selection, the song with the highest prior preference is used as a recommendation. Finally, weighted combination and filtering are used to produce recommendations based on the two recommendation findings.

III. EXISTING SYSTEM

Modern music recommendation systems use a variety of data types, such as the user's listening history, playlist data, and metadata. Several of these systems utilize machine learning methods to determine user preferences and provide customized suggestions.

In addition, face recognition algorithms may be able to infer emotions from facial expressions. These methods are often used in fields such as marketing and user experience research, but their use in music recommendation is limited.

Thus, a system that integrates face recognition with real-time music suggestion based on facial emotions would be an innovative method for customized music recommendation. Such a system would combine face recognition technology and machine learning algorithms to assess the emotions sensed from facial expressions and provide real-time music suggestions depending on the user's emotional state. This would be a difficult undertaking, but it may lead to a more interesting and customized music listening experience.

IV. PROPOSED SYSTEM

A CNN is used because it is an important and highly accurate approach to image classification problems. The proposed real-time music recommendation system would need a large library of photographs and videos containing faces and their accompanying emotions. Photos would be preprocessed to identify faces and extract relevant features, such as facial landmarks and facial expressions. A CNN model would learn the link between facial expressions and emotions from this data. The output of the CNN model would be used to deliver real-time music suggestions based on the user's emotional condition. This would include merging the model with a music recommendation system that produces personalized recommendations according to the user's listening history, playlist data, and other relevant information. The ultimate system would use facial recognition technology and machine learning algorithms to provide a more interesting and tailored music-listening experience based on the user's emotional state.

A. Data Preprocessing and Model Training

Data preprocessing is a critical step in developing a music recommendation system based on real-time face emotions. This module involves several important steps, including data collection, cleaning, preprocessing, augmentation, and splitting.
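The preprocessing and splitting steps above can be sketched as follows. This is a minimal illustration with hypothetical helper names, assuming face crops have already been reduced to fixed-size 48 x 48 grayscale images (as in common emotion datasets such as FER-2013); the paper does not specify its exact pipeline.

```python
import numpy as np

def preprocess_images(images: np.ndarray) -> np.ndarray:
    """Scale pixel values to [0, 1] and add a channel axis for a CNN."""
    x = images.astype("float32") / 255.0
    return x[..., np.newaxis]            # shape: (n, 48, 48, 1)

def train_val_split(x, y, val_fraction=0.2, seed=0):
    """Shuffle the data and split it into training and validation sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_fraction)
    val, train = idx[:n_val], idx[n_val:]
    return x[train], y[train], x[val], y[val]

# Synthetic data standing in for real face crops and emotion labels:
images = np.random.randint(0, 256, size=(100, 48, 48), dtype=np.uint8)
labels = np.random.randint(0, 7, size=100)        # 7 emotion classes
x = preprocess_images(images)
x_tr, y_tr, x_va, y_va = train_val_split(x, labels)
print(x.shape, x_tr.shape[0], x_va.shape[0])      # (100, 48, 48, 1) 80 20
```

The channel axis and [0, 1] scaling match what typical image-classification CNN front ends expect; augmentation (flips, small rotations) would be applied to the training split only.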
Fig. 1: Facial expressions
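The final recommendation step, mapping the CNN's predicted emotion class to music, can be sketched as below. The seven emotion labels follow the classes listed in the literature review; the playlist names and the rule of answering negative emotions with uplifting music are illustrative assumptions, not the paper's exact mapping.

```python
# Hypothetical emotion-to-playlist mapping: negative emotions get
# uplifting music, positive/neutral emotions get mood-matched music.
EMOTIONS = ["anger", "contempt", "terror", "joy",
            "sorrow", "neutrality", "surprise"]
NEGATIVE = {"anger", "contempt", "terror", "sorrow"}

PLAYLISTS = {
    "joy": "upbeat_hits",
    "neutrality": "easy_listening",
    "surprise": "energetic_mix",
    "uplifting": "feel_good_classics",   # fallback for negative moods
}

def recommend_playlist(class_index: int) -> str:
    """Map a CNN output class index to a playlist name."""
    emotion = EMOTIONS[class_index]
    if emotion in NEGATIVE:
        return PLAYLISTS["uplifting"]    # try to improve a bad mood
    return PLAYLISTS[emotion]

print(recommend_playlist(3))   # joy -> upbeat_hits
print(recommend_playlist(4))   # sorrow -> feel_good_classics
```

In a full system the class index would come from `argmax` over the CNN's softmax output, and the playlist lookup would additionally be filtered by the user's listening history.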
V. RESULTS
VI. CONCLUSION
Music has a powerful impact on our emotions, and the development of emotion-based music recommendation systems that use facial recognition technology can enhance our listening experience. These systems analyse the user's facial expressions in real time to determine their emotional state and generate music recommendations that match their mood. The accuracy and robustness of the facial recognition technology and algorithm are crucial to the effectiveness of these systems. It is essential to have a large dataset of facial expressions and corresponding emotional states to train the algorithm effectively. The algorithm should also take into account the user's musical preferences, including their favourite genres, artists, and songs, to generate relevant recommendations. To provide an optimal user experience, these systems should generate music recommendations in real time with low latency and fast response times. The seamless and uninterrupted flow of music recommendations as the user's emotional state changes can enhance the listening experience and improve the user's mood. Therefore, these systems require efficient algorithms and hardware optimization to minimize latency and provide fast response times. The development of these systems has the potential to change the way we discover and listen to music. They can provide a personalized music listening experience that matches our emotional state, enhancing our mood and overall well-being. As the availability of facial recognition technology and the popularity of music recommendation systems continue to grow, we can expect to see more development in this area, leading to more sophisticated and effective systems in the future. The CNN takes an input image (a p × q feature matrix) and, through its hidden layers, conducts feature extraction and classification. By using the KNN algorithm, emotion recognition is improved. By increasing the number of training
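The KNN step mentioned above can be sketched as a minimal pure-NumPy classifier. The synthetic 2-D feature vectors stand in for the features a CNN would extract, so the numbers are illustrative only.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]      # indices of the k closest points
    votes = train_y[nearest]
    return int(np.bincount(votes).argmax())

# Two toy emotion clusters: class 0 near (0, 0), class 1 near (5, 5).
train_x = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                    [5.0, 5.1], [4.9, 5.0], [5.2, 4.8]])
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(train_x, train_y, np.array([0.1, 0.1])))  # 0
print(knn_predict(train_x, train_y, np.array([5.0, 5.0])))  # 1
```

Enlarging the training set, as the conclusion suggests, gives the majority vote more reliable neighbours and therefore improves recognition accuracy.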
12. Jany Shabu, S.L., Bharath Vinay Reddy, S., Satya Ranga Vara Prasad, R., Refonaa, J., Dhamodaran, S., "COVID-19 Detection Using X-Ray Images by Using Convolutional Neural Network", Lecture Notes in Networks and Systems, 2022, 458, pp. 569–575.