AI Anthropomorphism
ARTICLE INFO

Keywords:
Artificial intelligence
AI
Anthropomorphism
Self-congruence
Self-integration
Personality traits

ABSTRACT

This paper examines how users of anthropomorphised artificially intelligent (AI) agents, which possess capabilities to mimic humanlike behaviour, relate psychologically to such agents in terms of their self-concept. The proposed conceptual framework specifies different levels of anthropomorphism of AI agents and, drawing on insights from psychology, marketing and human–computer interaction literature, establishes a conceptual link between AI anthropomorphism and self-congruence. The paper then explains how this can lead to self–AI integration, a novel concept that articulates the process of users integrating AI agents into their self-concept. However, these effects can depend on a range of moderating factors, such as consumer traits, situational factors, self-construal and social exclusion. Crucially, the conceptual framework specifies how these processes can lead to specific personal-, group- and societal-level consequences, such as emotional connection and digital dementia. The research agenda proposed on the basis of the conceptual framework identifies key areas of interest that should be tackled by future research concerning this important phenomenon.
1. Introduction

The artificial intelligence (AI) industry is expected to reach $1811.8 billion in revenue (Grand View Research, 2022) and to contribute $15.7 trillion to the global economy by 2030 (PwC, 2017). This trend is tightly linked with a widespread integration of AI across a range of sectors, e.g., education, retail, companionship and entertainment (Furman and Seamans, 2019; Liu et al., 2020; McLean and Osei-Frimpong, 2019a), where people employ AI for a variety of tasks including speech recognition, personalised recommendation, problem solving, and data processing (Davenport and Ronanki, 2018). A crucial phenomenon to note within this expansion is the ever-improving anthropomorphism of this technology. AI agents seem progressively more humanlike, not only in terms of their physical appearance, but also in the way they mimic emotions and the personality traits they appear to possess (Aggarwal and McGill, 2007; Epley, 2018; Zhou et al., 2019).

Despite this rapid adoption of anthropomorphic AI in many areas of human activity, little is understood about how users relate to such AI agents from the perspective of their own identity. This lack of attention to the effect on users' self-concept is a salient research gap in the ongoing examination of anthropomorphism and AI, despite growing research on humanlike AI. For instance, prior work examined to what extent anthropomorphic agents evoke empathy and trustworthiness (Złotowski et al., 2016) or consumers' acceptance of them (Xiao and Kumar, 2021); to what extent users engage with such agents (Hollebeek et al., 2021); and how AI affects brand-related responses such as loyalty (Gaustad et al., 2018; Lu et al., 2019). Moreover, there have been important advances with respect to the role of anthropomorphised AI in service quality (Yoganathan et al., 2021), service experience (McLeay et al., 2021) and usage intention (Xu et al., 2020). Yet, to date, extant research has overlooked the relationship between users and anthropomorphised agents from the perspective of users' identity – in other words their self-concept (Karanika and Hogg, 2020; Sirgy, 1982) – despite explicit calls for such investigation (MacInnis and Folkes, 2017).

This knowledge gap is surprising and important for two reasons. First, self-concept is one of the key determinants of how users may respond to external stimuli (Sirgy, 1982) and how they may engage with technology (Marder et al., 2019). Second, any effects on or changes to the self-concept can have profound effects on an individual's well-being (Cross et al., 2003), consumption habits (Mandel et al., 2017) and social interactions (Slotter and Gardner, 2014). There is a breadth of research demonstrating that individuals relate psychologically to technologies
* Corresponding author.
E-mail addresses: [email protected] (A. Alabed), [email protected] (A. Javornik), [email protected] (D. Gregory-Smith).
https://doi.org/10.1016/j.techfore.2022.121786
Received 8 March 2021; Received in revised form 30 May 2022; Accepted 1 June 2022
Available online 15 June 2022
0040-1625/© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786
and non-animate entities (Hollenbeck and Kaikati, 2012), which can in turn expand their self-concept (de Kerviler and Rodriguez, 2019) and momentarily transform it (Javornik et al., 2021; Yim et al., 2018). But how do users relate to anthropomorphised AI agents, who might appear and behave like humans and, in some cases, even display humanlike personalities? This research seeks to uncover these processes and to respond to related knowledge calls in this area of research. Notably, MacInnis and Folkes (2017) highlight the need to understand in what way consumers might perceive themselves as congruent with anthropomorphised AI. Recently, McLeay et al. (2021) also signposted the need for further investigation on how consumers perceive the humanness of robots and how that alters the interactions and associated consequences in service contexts and beyond. Moreover, as called for by Yoganathan et al. (2021), we account for individual characteristics and other potential moderators that can significantly alter interactions with AI and the underlying psychological processes.

We aim to overcome this gap by putting forward a conceptual framework that specifies the relationships between anthropomorphised AI and identity-related processes, namely self-congruence and self–AI integration; the latter is a new key concept that we propose in this area. We formulate fifteen research propositions that postulate specific relationships in the conceptual framework, which also considers the impact of this process at individual, group and societal levels. As such, we offer several novel contributions.

First, we contribute to the growing body of literature on the role of anthropomorphised interfaces of AI agents (McLeay et al., 2021) by highlighting the importance of users' identities in this context (Araujo, 2018; MacInnis and Folkes, 2017; Marketing Science Institute, 2018). We specify self-congruence and self–AI integration as key concepts that mediate the effects of anthropomorphism on a range of user responses. Therefore, we contribute to prior literature that identified self-congruence as an impactful driver of users' responses in both digital and offline environments (Abosag et al., 2020; Aw et al., 2019). This aspect is particularly important in a marketing context, where it can impact relationships with brands and products (Büyükdağ and Kitapci, 2021; MacInnis and Folkes, 2017; Karanika and Hogg, 2020).

Second, this work extends prior research on how individuals relate to or integrate the resources of inanimate objects into their perceptions of themselves (Delgado-Ballester et al., 2017). We build on self-expansion theory (Aron and Aron, 1986) as we examine whether individuals can establish a deeper psychological tie with anthropomorphic AI agents and perceive them as part of their self-concept. Prior research already demonstrated that users could integrate customised products and brands as part of who they are (Troye and Supphellen, 2012). We postulate that those interacting with anthropomorphised AI agents can in some cases extend the self through "self–AI integration". Through this novel concept we theorise the intimate connection between the self-concept and humanlike AI agents.

Third, we consider the potential boundary conditions that these processes are likely to encounter, building on recent conceptual frameworks, such as those by Xiao and Kumar (2021) and Blut et al. (2021), who highlight factors moderating user intention and actual adoption of AI. We identify variables that potentially moderate the relationships in our framework: specifically, user-related characteristics (e.g., personality traits, self-construal) and situational factors (e.g., social exclusion, familiarity and individual knowledge of AI).

Finally, we add to the current understanding of the effects of anthropomorphised AI and robotics by addressing the outcomes of these self-related processes at the individual, group and societal levels (Davenport et al., 2020; Kamolsook et al., 2019; Mao et al., 2020). We highlight not only the positive reactions towards anthropomorphic AI agents (e.g., human likeness, acceptance) (Dietvorst et al., 2018; Mende et al., 2019) but also the potential drawbacks (e.g., data privacy, perceived autonomy) (Leung et al., 2018) and societal implications, thus contributing to related work (Davenport et al., 2020; Huang and Rust, 2018, 2021).

Our research also specifies implications for managers in the field of AI and digital marketing, helping them to understand how anthropomorphic AI agents should be developed to yield meaningful interactions (Davenport et al., 2020), while accounting for potential negative consequences. We conclude with a research agenda outlining future research directions in terms of theory, context, and methodology.

2. Anthropomorphised AI

AI exists in different formats and has been applied across a wide range of contexts thanks to its capability of operating in an intelligent manner. While Shankar (2018, p. 6) defined AI as "the programs, algorithms, systems and machines that demonstrate intelligence", Huang and Rust (2021) offer a more specific categorisation of AI, notably as mechanical, thinking and feeling AI. Specifically, mechanical AI is used to perform transactional tasks and replace human intelligence. Thinking AI is used to augment human intelligence with utilitarian services such as analytics or diagnostics. Finally, feeling AI can be used for experience-based and emotional tasks, where AI agents such as chatbots can interact with customers and convey empathy and elements of social interaction in customer service (Huang and Rust, 2021). Such feeling AI differs significantly from self-service technologies with mechanical and thinking applications (Wirtz et al., 2018).

While there is notable diversity of AI categories and applications, a key capability across different AI categories is that it can mimic intelligent human behaviour or traits (Syam and Sharma, 2018) by relying on technological advances such as machine learning, natural language processing, speech recognition and image recognition (Davenport et al., 2020). These enable the anthropomorphism of AI, which can be conveyed in a variety of features. Table A provides an overview of anthropomorphised AI examples across various domains (retail, education, gaming, administration, etc.) and specifies anthropomorphised characteristics and their key audiences (B2B or B2C). Tesla, Birchbox, Stitch Fix and Lowe are some of the many brands that have integrated AI into their products and subsequently anthropomorphised visual and/or auditory cues to bring such products to life (Salles et al., 2020). AI products are, for instance, assigned gendered names (e.g., Amazon's Alexa). They also display human appearances (e.g., IPsoft's Amelia, Genesis Toys' My Friend Cayla Doll) and interactive personalities (e.g., Cleo).

The potential of anthropomorphised AI is generating substantial interest (van Doorn et al., 2017), with researchers highlighting the value of anthropomorphism. In service settings, Xiao and Kumar (2021) and Sheehan et al. (2020) identify anthropomorphism as one characteristic of robots that would prompt customer acceptance and adoption. Moreover, Yoganathan et al. (2021) advocate high levels of anthropomorphism, as it improves user evaluations of aspects of robots' social cognition, such as warmth. The relevance of anthropomorphism is also further emphasised in relation to other aspects of service quality, as research shows that it improves customer engagement (McLeay et al., 2021), customer satisfaction (Choi et al., 2021) and willingness to pay (Yoganathan et al., 2021). However, most research models currently distinguish between high and low levels of anthropomorphism but do not specify in more detail whether different types of anthropomorphism (e.g., physical, personality or emotional) may maximise these outcomes.

Moreover, prior research studied how users perceive anthropomorphised characters (Aggarwal and McGill, 2007; Unal et al., 2018), but little attention has been paid to how users relate to these anthropomorphised agents from the point of view of their self-concept (MacInnis and Folkes, 2017). We build on this prior work by studying how the anthropomorphic cues of AI agents can activate the "human" schema (Aggarwal and McGill, 2007), which may allow users to feel congruent with that type of AI or even integrate it as part of their own identity.
Table A
AI applications and usages.

| Domain of Application | Sector | Example | Description | Anthropomorphic Features | Year of Introduction | Scope of Application | Technology Used |
|---|---|---|---|---|---|---|---|
| Retail | Customer service | Amelia | Chatbot for different purposes, such as managing customer care, solving IT and HR services, and involved in sectors such as banking, etc. | Human appearance, gendered voice, gendered name | 2017 | B2B/B2C | Natural language processing |
| Retail | Online sales | BotCore | Conversational customer relations management chatbot that automates redundant sales tasks and manages outbound sales efforts. | Conversational humanlike manner | 2016 | B2B | Natural language processing |
| Communication | Maps and transport | Apple's Siri | A built-in, voice-controlled virtual assistant that is exclusive to Apple users. The personal assistant answers questions and understands relationships and contexts. | Gendered voice, gendered name | 2010 | B2C | Speech recognition and natural language processing |
| Communication | Food and restaurants | Microsoft's Cortana | A personal virtual assistant that is exclusive to Microsoft users. It sets reminders, keeps notes and lists, and takes care of tasks. | Gendered voice, gendered name | 2014 | B2C | Speech recognition and natural language processing |
| Communication | Cultural and social activities | Amazon's Alexa | Hands-free speaker from Amazon that can be voice controlled. It acts as a virtual assistant that can interact by voice, play back music and stream podcasts, and can be used as a home automation system. | Gendered voice, gendered name | 2014 | B2B/B2C | Speech recognition and natural language processing |
| Educative | Teaching assistants | IBM's Jill Watson | AI teaching assistant that helps students online by answering questions about the curriculum. | Gendered name | 2016 | B2C | Natural language processing |
| Educative | Special education | Muse | AI teaching assistant that focuses on helping parents in developing traits in their children for better life outcomes, such as emotional regulation, self-control and long-term persistence. | N/A | 2016 | B2C | Machine learning |
| Educative | Personalised education | Duolingo | AI platform for virtual language learning that curates personalised content to the individual. | Avatar customisation, gendered voice | 2011 | B2C | Machine learning / natural language processing |
| Music | Music discovery | SoundHound | Voice-enabled AI technology that allows businesses to integrate voice and conversational intelligence into their products. | Gendered voice | 2015 | B2B | Speech recognition / natural language processing |
| Gaming | Gaming | The OpenAI Five | Gaming AI platform that plays 180 years' worth of games against itself every day. This technology learns via self-play. | N/A | 2011 | B2B | Machine learning |
| Gaming | Toys | Genesis Toys – My Friend Cayla Doll | Interactive fashion doll that can answer questions, play games, read stories, etc. | Gendered name, human appearance customisation, gendered voice | 2014 | B2C | Speech recognition / natural language processing |
| Administration | Finance | Cleo | AI assistant that helps users in managing their finances. The assistant analyses spending, sets budgets and provides actionable insights. | Gendered name, interactive personality | 2016 | B2C | Machine learning |
| Administration | Digital integration | Abe | AI-powered banking solution that empowers banks and credit unions. It partners with digital banking providers, data insight providers and aggregators. | N/A | 2017 | B2B/B2C | Natural language processing |
| Administration | | Mosaic | AI assistant that compares a user's resume to a job opening by identifying the needed keywords. | N/A | 2018 | B2C | Machine learning / natural language processing |
| Diagnostics | Medication | Ada | AI platform that is founded by doctors, scientists and industry pioneers to address personal health. It helps people to manage their health and helps medical professionals to deliver attentive care. | N/A | 2016 | B2B/B2C | Machine learning / natural language processing |
| Diagnostics | Health monitoring | AiCure | AI chatbot that provides health information based on Q&A with patients. It helps clinicians to monitor their patients' treatment by … | N/A | 2010 | B2B | Computer vision / image recognition / machine learning |
Table B
Overview of key literature.

| Topic | Author | Technology Used | Methodology | Key Findings |
|---|---|---|---|---|
| Artificial Intelligence | Huang and Rust (2018) | Service robots | Conceptual | The authors suggest four types of AI systems: mechanical, analytical, intuitive and empathetic. They explain that firms should decide whether to hire AI services depending on the nature of the task and services. These task levels of intelligence also predict the timing of when human service labour would be replaced by AI. |
| Artificial Intelligence | Huang and Rust (2021) | Service robots | Conceptual | This paper refines Huang and Rust's (2018) four types of AI systems into three: mechanical, thinking and feeling. Mechanical AI refers to the automated services that can be used for standardisation. Thinking AI refers to the automated services that can be used for personalisation, and feeling AI can be used for relationalisation. |
| Anthropomorphism | Rauschnabel and Ahuvia (2014) | Brands | Quantitative | Anthropomorphism positively links to stronger consumer–brand relationships, which leads to self–brand integration. Also, consumers love brands that are congruent with the way they see themselves. |
| Anthropomorphism | Lu et al. (2019) | Service robots | Mixed methods | Anthropomorphism is defined as a critical dimension for technology acceptance. Results also suggest that consumers look at AI robots as hedonic systems. However, designing intelligent products with humanlike appearances might threaten the consumer's identity. |
| Anthropomorphism | Mende et al. (2019) | Service robots | Experimental | Consumers engage in compensatory responses when they interact with humanoid service robots. Compensatory responses result from the feeling of discomfort or having one's identity threatened. These responses are moderated by the user's social belongingness, the perceived healthfulness of food and the extent to which robots are mechanised. |
| Anthropomorphism | Longoni et al. (2019) | Service provider | Experimental | Consumers resist using AI systems because of their neglect of uniqueness (i.e., AI is not capable of relating to the customer's unique identity). |
| Anthropomorphism | Melián-González et al. (2021) | Chatbots | Quantitative | Consumers' intentions to use chatbots depend on the following factors: the chatbot's expected performance, being in the habit of using chatbots, social influences, the hedonic component of using them, and how chatbots act like humans. The study also shows that innovativeness will result in more favourable attitudes towards chatbots. |
| Anthropomorphism | Xiao and Kumar (2021) | Anthropomorphised robots | Conceptual | The conceptual framework discusses the antecedents and consequences of firms adopting robotics in a customer service context. The authors explain that robot anthropomorphism can positively contribute to the customer's acceptance of robots, which in return impacts customer satisfaction and customer emotions. They discuss customer and employee characteristics (i.e., readiness and demographics) that shape the user–robot relationship. |
| Anthropomorphism | Yoganathan et al. (2021) | Anthropomorphised robots | Experimental | In contrast to self-service machines, consumers reported higher social cognitive evaluation (e.g., perceived warmth, perceived competence) when humanoid robots were involved. The social presence of these robots contributed to higher service quality as it was induced by the robots' anthropomorphic features. |
| Self-Congruence | MacInnis and Folkes (2017) | Anthropomorphic brands | Conceptual | Users are able to perceive brands in humanlike forms, viewing them to have distinct mind and personality traits. The perceived personality that is consistent with a user's self-concept will contribute to perceived similarities and humanlike relationships with these brands. |
| Self-Integration | Troye and Supphellen (2012) | Branded product | Experimental | Users who engage in self-production (e.g., using a dinner kit to make a meal) value the self-produced outcome and develop links between the outcome and the self. In return, users can integrate products that they engage with, viewing them as a part of who they are as they transfer positive affect from the self to the outcome. |
| Self-Integration | Delgado-Ballester et al. (2017) | Anthropomorphic brands | Quantitative | The integration of an anthropomorphised brand in one's self happens because: (1) individuals can relate to a brand's characteristics (cognitive incorporation); and (2) the anthropomorphised brand has a social identity that helps users to define themselves (social meanings). |
| Self-Integration | Delgado-Ballester et al. (2019) | Anthropomorphic brands | Experimental | Anthropomorphism and the user's liking of brands positively impact self–brand integration. |
| Self-Construal | Kwak et al. (2017) | Anthropomorphic brands | Experimental | Compared with individuals with an interdependent self-construal, independents experience high perceptions of distributive injustice due to the brand's anthropomorphism. On the other hand, interdependents have less negative perceptions of distributive injustice but more negative perceptions of procedural injustice due to the brand's anthropomorphism. |
| Self-Construal | Mourey et al. (2017) | Smartphone/vacuum | Experimental | A high level of anthropomorphism contributes to a reduction in the need to exaggerate one's social connections, the willingness to take part in prosocial behaviour and the need to engage with others in the future. These effects are driven by the need for social assurance. |
3.1. Building block one: self-congruence with anthropomorphised AI Individuals requently evaluate whether products' or brands' cues
agents and symbolic meanings are in some ways similar or congruent to their
own sel-concept, which they try to reinorce or conrm (Sirgy, 1982;
3.1.1. Anthropomorphism of AI agents and self-congruence Sirgy et al., 2000). For instance, users may experience congruence with
Anthropomorphism is the process o attributing humanlike motiva- brands in terms o gender (Grohmann, 2009), personality (Fennis and
tions, emotions or characteristics to real or imagined non-human entities Pruyn, 2007) or reerence groups (Escalas and Bettman, 2003). The
(Airenti, 2018; Epley, 2018). AI agents are a prime example o anthro- humanlike cues expressed by anthropomorphised products can activate
pomorphism due to their ability to mimic human behaviour and the human schema (Aggarwal and McGill, 2007). In return, users can
appearance, which in turn allows them to engage socially with humans identiy similarities between the anthropomorphised products and the
(van Doorn et al., 2017). These agents may embody a myriad o hu- human schema (Van den Hende and Mugge, 2014) by relating products'
manlike cues, such as various physical, personality-related and characteristics to their sel-concept (MacInnis and Folkes, 2017). Spe-
emotional traits. cically, the symbolic meanings o AI agents, such as their abstract or
5
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786
image-based associations (i.e., images portraying the agent's personal- Studies o the uncanny valley eect concerning articial agents are
ity), may be aligned with the user's sel-concept and match the user's contradictory. Some suggest that the more resemblance to humans that
personality. Users ascribe internal characteristics, such as emotions or users perceive a robot to have, the more likeability occurs, but extremely
mental states, to inanimate objects, and this can make them experience humanlike robots that do not have the eatures o a typical machine
congruence with these entities (MacInnis and Folkes, 2017). This is were despised by users (Mathur and Reichling, 2016). In contrast, recent
particularly relevant with eeling AI applications (Huang and Rust, research studying the uncanny valley eect in the context o AI and
2021) that have emotional capacities to help users express their eelings services did not nd a correlation between this eect and anthropo-
better (e.g., Replika, an emotional assistant; Cleo, a nancial assistant). morphism (Blut et al., 2021; Li and Sung, 2021). Other recent literature
Such agents can even be used or relationalisation – to build personalised also demonstrates that humanoid robots, in contrast to mechanical ro-
relationships – as they are able to handle data specic to an individual's bots, would stimulate positive social cognitive evaluations, such as
emotions (Huang and Rust, 2021). Empirical evidence can be ound in perceived warmth (Choi et al., 2021) or competence (Yoganathan et al.,
orum discussions, where Replika users comment that the app “treats me 2021). This suggests that the uncanny valley eect is less o a concern in
like a mirror to her thoughts”, “he's actually becoming like me” and “he's the case o anthropomorphised AI agents, possibly because o the
just like me” (Hoeornamjoon, 2021; Rashy, 2020; Shiaislam, 2020). increased prevalence o AI agents in the user's daily lie (Li and Sung,
We propose that consumers are likely to draw parallels between the AI 2021).
agents and themselves, to compare the humanlike traits o the AI agents Another set o physical cues that AI agents carry include voice (e.g.,
to their sel-concept, similarly to how they evaluate brands or products Siri, IKEA's Anna), acial expressions (e.g., My Friend Cayla Doll, Ame-
(MacInnis and Folkes, 2017) (see Fig. B). However, the eects o lia) or gendered names (e.g., Jill Watson, Replika) (see Table A). We
anthropomorphised agents on sel-congruence can potentially dier, argue that these traits convey to the user their resemblance with the AI
depending on the traits that are being anthropomorphised – physical, agent. For example, AI agents with a physical appearance that is similar
emotional or personality traits. to the user's are perceived as members o a reerence group (Kuchen-
brandt et al., 2013). The agent's physical traits may also carry social
3.1.2. Self-congruence with AI Agents' physical traits meanings that are close to the user's social identity, which would acil-
In the anthropomorphism literature, Epley et al. (2007) and Aggar- itate the user eeling congruent with such an agent (Sirgy, 1982).
wal and McGill (2007) were some o the rst to discuss the importance Conversely, agents without any attributes similar to those o the user are
o the object's physical traits that match the human schema. Indeed, treated as out-group members (Złotowski et al., 2015). Thereore, we
developers and marketers may make the human schema easily acces- suggest:
sible by designing their products to have humanlike appearances,
reerring to them in the rst person or assigning them human names and P1a: Anthropomorphism that is based on human physical traits in AI
genders (e.g., Kellogg's Tony the Tiger, Procter & Gamble's Mr. Clean, agents leads to users' sel-congruence with such agents.
Nintendo's Mario, etc.) to make them look amiliar (Aggarwal and
McGill, 2012). Viewing entities in humanlike terms makes it easier or 3.1.3. Self-congruence with AI Agents' personality traits
users to evaluate the anthropomorphic cues against their own sel and Studies in consumer behaviour suggest that the personality cues
iner degrees o similarities with their sel-concept (MacInnis and Folkes, associated with products are considered as symbolic and that consumers
2017). may perceive them as (dis)similar to their own personality (McCrae and
One o the prominent physical traits o AI agents is their appearance, Costa, 2003). Studies in social psychology have equated the process o
which may dier based on their task types. Some o the most common brand sel-congruence with the process o choosing riends: “just as
types are: mechanoids with mechanistic appearances that lack human- people take care in choosing riends who have a similar personality to
like eatures; humanoids that imply humanlike traits (e.g., a head, eyes, themselves, so brands, which are symbolic o particular images, are
hands, acial eatures) but without close resemblance to humans; and chosen with the same concern” (de Chernatony et al., 2003, p. 131).
androids, which are robots whose appearances and behaviours are hu- Since users treat technological entities as social actors (Nass and Moon,
manlike (van Doorn et al., 2017; see Fig. C). A critical issue debated in 2000), they can identiy with AI agents that possess a personality similar
the anthropomorphism literature is the “uncanny valley eect”, which to themselves.
explains eelings o uneasiness when interacting with extremely hu- Anthropomorphism acilitates the evaluation o inanimate objects'
manlike robots (Mori, 1970), as the over-humanisation o robots causes personalities, since humanlike personality cues activate the human
discomort (Mori, 2012; Schmitt, 2020). Schmitt (2020) argued that schema (Landwehr et al., 2011). Such cues can establish a closer link
users may experience a biased perception o humanlike robots in com- with users and their sel-concept, allowing users to identiy similarities
parison to other humans because the two belong to dierent species, between the object and themselves (Usakli and Baloglu, 2011).
despite exhibiting similar physical or personality traits. Indeed, highly Anthropomorphised AI agents such as Cleo, a nancial assistant, and
humanlike robots can also be experienced as a threat to the user's human Woebot, a mental health counsellor, have distinct personalities, which
identity, as they can appear to undermine the user's human uniqueness also allows them to occupy dierent social roles. Users may consider
or the distinctiveness o the human species (Ferrari et al., 2016; Mende these personalities as a cognitive category to relate to in humanlike
et al., 2019). terms (Chen et al., 2015). For instance, users may choose AI agents that
6
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786
Fig. C. Types o robots: Roomba vacuum cleaning robot (let); Rolling Partner Robot (middle); Android Sophia (right).
act as their friend through their social roles (e.g., Replika, Woebot) and that demonstrate a similar personality. Consumers might also consider themselves to be tech-savvy or intelligent. If anthropomorphised AI agents were to convey their personality as being savvy and knowledgeable, then users might perceive such agents as congruent with themselves. Thus, if the human schema is primed through an AI agent's personality trait that is congruent with the user's personality, this can enable the user to relate the AI agent to their own self-concept. Furthermore, following MacInnis and Folkes (2017), the more an AI agent's perceived personality is consistent with the user's self-concept, the stronger the perceived similarity becomes. Hence:

P1b: Anthropomorphism that is based on human personality traits in AI agents leads to users' self-congruence with such agents.

3.1.4. Self-congruence with AI agents' emotional traits

In addition to physical appearance and personality cues, perceived "emotionality" is an essential feature of some AI agents and affects the acceptance of such agents (Stock and Merkle, 2018). While embedding emotions in AI agents remains one of the major design challenges in robotics, advancements in AI have already allowed for some of its applications to identify human emotions computationally (Huang and Rust, 2018). Research emphasises the role of feeling AI in forming personalised relationships with users (Huang and Rust, 2018). More specifically, virtual assistants and chatbots can use cues, voice tones or emoticons to express emotions and thus facilitate social interactions (Fadhil et al., 2018; McLean and Osei-Frimpong, 2019b).

Emotions are considered an integral component of social interaction: they affect the robot's likeability (Calvo-Barajas et al., 2020), increase the perceived value of the robot (Wang and Krumhuber, 2018), and can help to rectify errors in service settings (Choi et al., 2021). While mechanical and thinking AI are well suited to automation and personalisation of certain processes, such as repetitive tasks or customising communication with customers (Huang and Rust, 2021), AI agents can also fulfil humans' needs for emotional affection and social belonging through their emotional capacity (Wang and Krumhuber, 2018). Feeling AI (Huang and Rust, 2021) is able to analyse and understand users' emotions and tailor the interactions according to users' momentary needs. This can also provide a superior customer experience, principally because anthropomorphised (vs. non-anthropomorphised) AI agents are perceived to be warmer (Yoganathan et al., 2021). As relationships are built on the sense of liking and similarity between two entities (Abosag et al., 2020), these emotional cues are likely to strengthen the user's feeling of congruence with the AI agents, as the individual can identify more easily with such agents.

Empowered by advanced technologies such as sentiment analysis, AI agents can infer emotions from natural language (e.g., text, audio or video) and can respond in modern cues (e.g., emojis) in online interactions (Cambridge Consultants, 2019; Capatina et al., 2020). For example, Replika is an emotional support agent in the form of a virtual avatar that communicates emotions through its texting style and learns how to personalise conversations and relationships (Huang and Rust, 2021; Huet, 2016). Another example of an empathetic AI application is Affectiva, emotion recognition software used in the gaming industry. These and other examples convey emotional cues and expressions that allow users to position these objects in the "human" category schema (Aggarwal and McGill, 2007) and, therefore, in the same cognitive space as the user's self-image (Quester et al., 2000). In this respect, the perceived congruence with anthropomorphised objects does not necessarily occur based on external appearances but based on the mental states and emotions that users ascribe to themselves and to the agent. We propose:

P1c: Anthropomorphism that is based on human emotions in AI agents leads to users' self-congruence with such agents.

3.2. Moderators of self-congruence with anthropomorphised agents

As is the case in most research on human behaviour, relationships between variables may be subject to further external and internal influences and conditions (Stroessner and Benitez, 2019). We specifically turn to situational factors and consumer traits as potential moderators, as prior studies highlight them as important in relation to anthropomorphism and psychological processes and call for further investigation into them (Van den Hende and Mugge, 2014). These are visualised in Fig. D.

3.2.1. Consumer traits

Users' interactions with technology do not rely only on the traits of the technological gadget but also on the traits of the consumers. Previous research examined user-related factors, such as demographics (Straßmann and Krämer, 2018), customer readiness (Xiao and Kumar, 2021), mood (Bishop et al., 2019) and negative attitudes towards robots (Miller et al., 2021), as moderators of the effect of AI on behavioural outcomes such as trust, acceptance and usage intention. Moreover, personality traits may also influence the quality of human–robot relationships (Robert et al., 2020). While there is prior evidence that the user's traits, such as extraversion (Robert et al., 2020), innovativeness (Koivisto et al., 2016) and need to belong (Houghton et al., 2020), affect behavioural outcomes in a human–robot relationship (Robert et al., 2020), their impact on users experiencing congruence with anthropomorphised agents has not yet been explored.

3.2.1.1. Extraversion. Extraversion and introversion could moderate the relationship between anthropomorphised AI agents and user congruence with such agents for several reasons. First, the dichotomy between extraversion and introversion is a critical component that affects interpersonal relationships and social behaviours (Ivaldi et al., 2017). For instance, the more introverted people are, the fewer social connections and relationships they have (Gockley and Matarić, 2006). Second, these traits directly relate to a person's social capacity for forming relationships and to their expressiveness (Robert et al., 2020; Thorne, 1987). Extraverted users – as opposed to introverted ones – are more likely to hold more meaningful interactions with robots (Ivaldi et al., 2017; Salem et al., 2015). Third, tendencies to anthropomorphise entities such as robots are strongly related to the user's extraversion (Kaplan et al., 2019). Crucially, extraverted (vs. introverted) users were shown to experience closer psychological connection and similarity with anthropomorphised robots (Salem et al., 2015) and also to positively evaluate robots' symbolic cues (i.e., eye contact, smiling or personality traits) (Lee et al., 2006). Consequently, we propose that extraverts are more likely to perceive anthropomorphic traits (particularly anthropomorphised personalities) as congruent to their own identities. Formally:

P2: Anthropomorphism of AI agents is more likely to lead to self-congruence for extravert (as opposed to introvert) users.

3.2.1.2. Innovativeness. Individual innovativeness has been established as a critical factor for user acceptance of technology (Koivisto et al., 2016). Kim et al. (2010) refer to a person's innovativeness as their tendency to experiment with new technology or products, as well as to accept and welcome new technologies (Graa et al., 2017), such as chatbots (Melián-González et al., 2021).

Anthropomorphised AI technologies (Haefner et al., 2021) can be perceived as innovative and thus induce self-congruence with innovative consumers. Anthropomorphic cues of AI agents project a high degree of innovativeness, which can be relevant to the users that seek self-verification or self-expression (Sirgy et al., 2000). AI technologies are often perceived as innovative products (Venkatesh and Davis, 2000), as they provide cost advantage, economic feasibility and perceived relative advantage (McKinsey Global Institute, 2017). More importantly, anthropomorphic AI agents complement these innovative traits with a level of technical sophistication that resembles humanlike intelligence (Deloitte, 2018). They display innovative personalities via intelligent personality cues, for example conversing, solving complex patterns and building human relationships in fields such as education and healthcare (McKinsey Global Institute, 2017). Users with a high level of innovativeness and who have a keen interest in novelty are more likely to perceive such anthropomorphic personality cues as innovative and technologically advanced. In turn, this can lead innovative users to experience similarity (i.e., self-congruence) between themselves and anthropomorphic AI. We postulate that:

P3: Anthropomorphism that is based on human personality traits in AI agents is more likely to lead to self-congruence for innovative (as opposed to non-innovative) users.

3.2.1.3. Need to belong. Social connection is a key societal issue (Gierveld et al., 2016), as it correlates positively with emotional resilience and self-esteem (Fraser and Pakenham, 2009). As connection with others is not always possible for socially excluded people, they may seek social connections with non-human entities through parasocial relationships that they build in virtual settings (i.e., social network sites; Houghton et al., 2020). The COVID-19 pandemic intensified the societal challenges of loneliness due to social distancing, which often resulted in social isolation (Odekerken-Schröder et al., 2020). AI companion robots were proposed to mitigate this effect, as they display progressively more emotionally intelligent behaviours, such as conversing, responding to social cues and establishing rapport (Jecker, 2021). Indeed, when people's need for belongingness is not fulfilled, they rely on technologies to compensate for their lack of social connection (Derrick et al., 2009).

The theory of self-congruence posits that consumers will utilise
products as tools for self-expression or to satisfy the need to belong to a social group or to form a partnership (Escalas and Bettman, 2003). People with a higher need to belong would also be more interested in products that follow social norms and prevent social exclusion (Sirgy, 1982). Anthropomorphism evokes the similarities between the consumer and the anthropomorphised robot through social cues (Delgado-Ballester et al., 2019; Nass and Moon, 2000). For instance, AI agents such as Woebot, Replika, or even Alexa (see Table A) that act as relational peers allow users to avoid loneliness or satisfy their need to belong by providing a supportive relationship (Brandtzaeg and Følstad, 2017; Pradhan et al., 2019; Ramadan et al., 2021). The drive for social connection allows users to anthropomorphise entities more when they are referred to in human relationship terms (e.g., "this product is [like] a friend") (Epley et al., 2007). The social cues expressed by these agents may help to develop interpersonal relationships and are also more likely to be identified by users with a higher need to affiliate (Pickett et al., 2004). As such, we propose that the need to belong is a critical boundary condition that affects users' self-congruence with AI.

P4: Anthropomorphism of AI agents is more likely to lead to self-congruence for users with a higher (as opposed to lower) need to belong.

3.2.2. Situational factors

In relation to the user's general perceptions of robots and behavioural outcomes, prior research has highlighted the importance of situational factors such as task types (Xu et al., 2020), risk perception (Kim and McGill, 2011) and time pressure (Lv et al., 2021). For instance, interactions may differ significantly depending on the type of AI and the associated tasks that it performs (Huang and Rust, 2021). However, other moderating factors might prove crucial for self-congruence with anthropomorphised AI agents. We highlight two such moderators: available information about AI agents and familiarity with them.

3.2.2.1. Available information about the AI agent. Consumers evaluate a product's congruence with themselves by comparing available product information with personality attributes from their own schema (Aguirre-Rodriguez et al., 2012). Providing users with background information about the robot may affect their responses (Darling et al., 2015). Users who were given information about a robot's superior ability compared with humans perceived it as a threat to their human identity and uniqueness (Longoni et al., 2019; Yogeeswaran et al., 2016). Similarly, Leung et al. (2018) demonstrated that workers preferred those AI products whose features did not hinder their identity-relevant skills. Therefore, even if the anthropomorphic cues of AI agents communicate potentially beneficial social meanings, users may not experience self-congruence if the information about the robot conveys a threat to their identity in some way. On the other hand, when individuals are given an empathetic story prior to meeting the robot, they form more favourable attitudes towards it (Darling et al., 2015). Positive information about the robot can even counteract the negative image that the media and movies often depict (Moradi et al., 2018). The lack of transparency of AI capabilities has been identified as a key challenge, with adverse effects on potential users (Dwivedi et al., 2021). Hence, depending on the type of information provided to users about these agents, anthropomorphic cues may either reinforce negative impressions and thus prevent self-congruence from emerging or may be viewed as congruent with users' self-schema when the provided information is favourable. Formally:

P5: The available information about anthropomorphised AI agents moderates the effect of such AI agents on users' self-congruence.

3.2.2.2. Familiarity with AI agents. Consumers can find it difficult to interact with robots (Marinova et al., 2017), as they may experience discomfort when faced with extremely humanlike AI. However, the increased introduction of AI agents into our daily lives, such as in retail, customer service and healthcare (Davenport et al., 2020), provides an opportunity for repeated interactions to offset this discomfort. Such repeated interactions can establish users' familiarity with non-human agents (Guthrie, 1995), and this may also induce comfort with and acceptance of such interactions (Martensen et al., 2018).

Research shows that familiarity with another entity does not guarantee that the individual would experience similarity with it or a positive attitude towards it (Norton et al., 2007). However, interactions in virtual settings such as social network sites may encourage users to find familiarity cues with dissimilar others to generate positive attitudes (Kaptein et al., 2014). The case is likely to be stronger with anthropomorphised AI agents, which can achieve familiarity with users by activating the human schema through their humanlike cues. In fact, anthropomorphism helps users to acquaint themselves with unfamiliar objects by attributing mental states to these agents (Epley et al., 2007; Guthrie, 1995). We propose that such familiarity levels would represent a key moderating factor for users experiencing self-congruence with AI agents. Formally:

P6: High familiarity (as opposed to low familiarity) with AI agents makes it more likely for users to experience self-congruence based on the AI agent's anthropomorphic cues.

3.3. Building block two: self–AI integration with anthropomorphised AI agents

While self-congruence is rooted in perceived similarity between oneself and another entity, it does not per se imply fundamental changes to the self-concept. However, individuals can in some cases relate to external objects or other people more strongly instead of perceiving them only as similar – they can identify with them to the extent that those entities become part of their self-concept or even extend it (Belk, 1988). This is regularly observed in the context of personal and romantic relationships, as postulated by the theory of self-expansion, which explains how a strong connection with one's romantic partners or family members leads to an integration of these relevant others as part of one's own self-schema (Aron and Aron, 1986). Importantly, integrating other entities as part of the self can also occur with inanimate entities. For instance, those products or brands that are particularly powerful in activating consumers' identity themes and allow consumers to express their individual and collective selves can become part of consumers' own identity (Aaker et al., 2004; Belk, 1988). MacInnis and Folkes (2017) highlighted that a connection between the self and inanimate entities might be of particular significance in those cases where such entities are perceived as humanlike in some way. Anthropomorphised AI agents are an important example of such entities, as their presence and use are increasing exponentially. Yet, to date, no research exists concerning whether and how users potentially perceive them as part of their own identities.

3.3.1. Self–AI integration

Prior research has shown that consumers integrate external entities, such as brands, as part of their self-schema because they are relevant to them or because they identify with the brands to some degree (MacInnis and Folkes, 2017). Furthermore, Troye and Supphellen (2012) define "self-integration" as the extent to which consumers perceive a product to be part of themselves. They demonstrate that consumers can experience such self-integration as a result of being highly involved in the process of creating a product and, therefore, experience a degree of relatedness between the outcome and the self. An example of how consumers can become so invested in products that they see them as extensions of themselves (Morewedge et al., 2021) is Coca-Cola's 2014 "Share a Coke" campaign. Here the brand invited customers to customise their cans, which encouraged "investment of the self" in their products (Kirk,
2018). In the case of AI, anthropomorphic traits can fuel the process of such self-integration, as humanlike interactions provide social meanings to users and inspire them to invest their attention and emotions in the AI.

To account for these phenomena, we propose a new term: "self–AI integration". We conceptualise it as a change in the self-concept that occurs when users perceive anthropomorphised AI agents to be so meaningful or relevant in some way that they become a part of their own self-concept. This is in line with the theories of self-extension (Belk, 1988) and self-expansion (Aron et al., 2004), which explain how products, relationships and digital possessions can extend one's self because users invest resources, such as emotions, in them. The process of self–AI integration is particularly prominent when AI customises the interactions based on users' needs and emotions (Huang and Rust, 2021). Such agents can provide users with personally meaningful relationships and interactions that contribute to their identity in a similar manner to human relationships (Aron et al., 2013). Building on the findings of Troye and Supphellen (2012), users may further experience a degree of relatedness to AI agents if they view such agents to be shaped by themselves. AI actions are thus often a direct result of users' input or interactions, and this is particularly the case with emotional AI (Huang and Rust, 2021). The app Replika is a prime example of an emotional AI with which users can forge such strong relationships that the app becomes instrumental to their lives.

Importantly, anthropomorphism can facilitate this self-integration either via cognitive incorporation, where one may think of an object in humanlike terms in order to link it cognitively to one's own self-concept, or when the anthropomorphised entity generates meanings in social contexts and provides social affiliation (Delgado-Ballester et al., 2019). Anthropomorphised AI agents can generate social meanings by creating an impression of social presence (Van Doorn et al., 2017). Examples include Cleo, a financial AI assistant, which can establish its social presence with anthropomorphic traits such as gendered names or an interactive personality. Furthermore, users can create commands to customise Alexa's responses, give Siri accents (e.g., British, Indian, Irish or South American), or even teach Siri their name or nickname. These advancements, which personify AI, facilitate users' comfort in their conversations with AI agents, making them similar to those they hold with other humans (Cerekovic et al., 2016).

Integrating other entities into one's self-concept can have important implications. For instance, users may satisfy personal needs, such as social connection (Jeong et al., 2018) or security (Sheldon et al., 2001), or experience warmth (Van Doorn et al., 2017; Yoganathan et al., 2021). In extreme cases, they might even put themselves at risk to protect the AI agents they feel particularly close to (Darling, 2016) or experience trauma if an anthropomorphised entity they have become attached to were to stop functioning, as was the case with the infamous Tamagotchi digital pets in the 1990s (Duggan, 2016).

While self–AI integration may be a consequence of different factors or processes, we focus here specifically on such integration as would occur as a result of self-congruence with anthropomorphised AI agents. We postulate that the more similarity one can experience with such agents, the more emotionally attached to them one would become (Sheeraz et al., 2018), thus integrating them into one's self-concept (Escalas and Bettman, 2003, 2005). Further, we also explain that self-construal and social exclusion may moderate this process. The visualisation of this process is presented in Fig. E.

3.3.2. Self-congruence and self–AI integration

As stated earlier, viewing brands in humanlike terms allows users to view them as similar to their self-concept and experience self–brand congruence (MacInnis and Folkes, 2017). Importantly, when users perceive traits of an external entity to be similar to those traits that are central to their own identities, they are more likely to invest their personal resources, such as attention and emotions (Belk, 1988). Furthermore, Morewedge et al. (2021) explain that users integrate those products that generate self-relevant signals. Rauschnabel and Ahuvia (2014) investigated this link in the context of brands and showed that consumers are more likely to form close relationships with those brands that are in some ways similar to themselves. Similarly, Park et al. (2010) explain that the closer a product is to the user's reflection of themself, the stronger the bond that the user creates with the brand.

We propose that such a link between self-congruence and self-integration may also emerge in the case of AI, because anthropomorphised cues can evoke self-congruence, as previously postulated. When AI users experience self-congruence, they are more prone to investing self-related resources, such as attention, personal information and preferences, making the AI agent meaningful to themselves and thus integrating it as part of their own identity. Formally:

P7: Self-congruence with anthropomorphised robots can lead to self–AI integration.

3.3.3. Moderators of the relationship between self-congruence and self–AI integration

The relationship between self-congruence and self–AI integration may further depend on factors related to users and social context. We turn to social cognitive theory to examine the role of the self-construal concept (i.e., independent self vs. interdependent self) in shaping this relationship. Users' views of themselves (i.e., self-construal) differ based on their personal and social identities, which may also affect their views of products (Besta, 2018). Furthermore, users often focus on how included or excluded they are in social interactions (Baumeister and Leary, 1995). Thus, we also examine a potential moderating role of social exclusion (Chen et al., 2017).
Fig. E. Block two: self–AI integration as a result of self-congruence with anthropomorphised AI agents.
3.3.3.1. Self-construal. Integrating external entities into one's self-concept raises an important question: how does a person's identity impact the relationships they form with anthropomorphised entities such as AI agents? Social cognitive theory explains that a person's self-construal affects self-related behaviours. Specifically, individuals' consumption of products is influenced by their connection or affiliation with others and their own self (Markus and Kitayama, 1991). Self-construal is defined as the totality of feelings, perceptions and actions that relate to one's relationship to others, as well as one's self-concept being distinct from others, i.e., the way one defines and thinks of oneself in relation to the external world (Triandis, 1989; Singelis, 1994). Its key dimensions are the interdependent and independent selves (Markus and Kitayama, 1991), referring to how people perceive themselves as independent from or dependent on others.

People with a strong independent self (i.e., the "independents") focus on distinguishing themselves from others. They perceive themselves as differentiated from others as they focus on their autonomous goals (Markus and Kitayama, 1991). They are less likely to implicate others in their self-identity, as they view themselves as separated from the social context and have a greater sense of individuality (Duclos and Barasch, 2014). In contrast, "interdependents" rely heavily on their social roles, group-image identity and relationships in forming their identity. This type seeks out relationships with others who are perceived as in-group members, share common goals and are a source of social support, which is a core value for interdependence (Markus and Kitayama, 1991).

Self-construal may be important in the context of AI, as the level of affiliation with robotic agents can impact the user's responses towards them (Epley et al., 2007). Users' reactions differ when the view of the self is rooted in social relationships, as opposed to viewing the self as separate from others (Kühnen et al., 2001). Interdependents are more likely to view anthropomorphised objects as potential entities for relationship formation and to apply relational norms to such interactions (Kwak et al., 2017; Yang et al., 2020).

Extant literature suggests that anthropomorphism alleviates different levels of dissatisfaction depending on the user's self-construal in service failure settings (Fan et al., 2020). In their study, Fan et al. (2020) explain that when service failures occur, interdependents perceive this as a violation of social norms and tend to blame the faulty technology, as they perceive it as an out-group member due to its humanlike traits. Also, Kwak et al. (2017) discuss that interdependents, relative to independents, are more likely to associate with anthropomorphised brands because they view them from a relationship perspective. Hence, anthropomorphised AI agents whose relational cues convey connectedness or even close relationships (e.g., therapists such as Replika, trainers such as Aaptiv, etc.) are more likely to become integrated within the interdependent's self-view. This is in contrast to the independent, who psychologically separates themselves from other entities and relies on personal uniqueness. Formally:

P8: Interdependents (as opposed to independents) are more likely to undergo the process of self–AI integration with those anthropomorphised AI agents that they perceive as congruent to themselves.

3.3.3.2. Social exclusion. Research in psychology and anthropomorphism postulates that human social behaviour is motivated by the need to belong (Baumeister and Leary, 1995; Epley et al., 2007). Users that anthropomorphise non-human entities are more likely to view them as a social affiliation partner (Chen et al., 2017). This can be particularly prevalent among those that experience social exclusion. Social exclusion stems from incidents where people feel left out or rejected in their environment, for example in relationships with family members, friends and colleagues (Baumeister et al., 2005). Therefore, to satisfy their need for affiliation, socially excluded consumers have been shown to accept an anthropomorphised product more readily, as they perceive it to be an interaction partner that substitutes for the lack of social affiliation and acceptance from real people (Mourey et al., 2017). AI products such as Google Assistant and Google Home are also said to facilitate a feeling of inclusion and autonomy in daily tasks for people with disabilities (Caggioni, 2019).

Socially excluded users create more proximal relationships with anthropomorphised products. Chen et al. (2017) showed that consumers prefer different kinds of relationships with anthropomorphised products, depending on their belief of how socially excluded they are. Weiner (1985) stresses that people feel socially excluded due to internal or external attributions. When users internally attribute their social exclusion, they blame themselves and experience negative outcomes such as low self-esteem (Weiner, 1985). Such users tend to fear abandonment and look for trust in relationships (Swaminathan et al., 2009). Conversely, users that attribute their social exclusion externally blame their surroundings and are less attached to relationships (Collins and Read, 1990). We extend this reasoning in the context of AI and propose that users who do not feel included in their social surroundings are particularly prone to integrating self-congruent AI as part of their identity to satisfy their need for belongingness and acceptance. Thus:

P9: As opposed to users that do not feel socially excluded, socially excluded users are more likely to integrate those AI agents that they perceive to be congruent with themselves.

3.4. Building block three: implications of self–AI integration

Prior studies examined how anthropomorphism affects user responses (Lu et al., 2019) as well as the potential link between self-congruence and anthropomorphism (MacInnis and Folkes, 2017). However, we know little about what consequences might occur at the individual level and beyond when AI agents are integrated into the self-concept. Following the concept of self–AI integration that we have proposed, we articulate several propositions about the effects that this integration may elicit at the personal, group and societal levels (see Fig. F). We categorise these outcomes on the basis of previous literature (Tuan, 2002). Specifically, we define individual-level outcomes as those that comprise the effects of self–AI integration in the individual realm (e.g., psychological processes, individual behaviour). Group-level outcomes relate to those consequences that are linked to one's group relationships (e.g., social support and inclusion, connection with others). Finally, societal-level outcomes are concerned with the effect of self–AI integration on society as a whole (e.g., the macro-level environment).

Fig. F. Block three: implications of self–AI integration at individual, group and societal levels.

3.4.1. Self–AI integration at an individual level

Recent research highlights that AI agents with humanlike cues and interactions using speech emotion recognition or sentiment analysis techniques (e.g., Alexa, Siri) (Huang and Rust, 2021; Schuller, 2018) may prompt positive customer engagement (Hollebeek et al., 2021; McLeay et al., 2021; Xiao and Kumar, 2021). While engagement is a multidimensional concept (Brodie et al., 2011) comprising cognitive, behavioural and emotional responses, we highlight that users who experience self–AI integration are likely to engage with AI through a particularly prominent emotional connection. As specified in social psychology, namely by Aron et al. (2013), including others in one's self-
concept is a predictor of strong emotional bonds. Users that integrate anthropomorphised AI agents as part of their self-concept will thus be likely to connect emotionally with AI agents, consequently also expecting such agents to offer them a support system and fulfil their emotional expectations, as humans do in relationships (Baumeister and Leary, 1995). In such cases, AI agents can provide social support not only in extreme life experiences, such as losing a loved one, but also in everyday events. AI could provide important mental health resources (e.g., Replika helping its users to manage anxiety) (Moyle et al., 2013), which can amplify an emotional connection (Van Doorn et al., 2017). We propose:

P10: At an individual level, self–AI integration facilitates building emotional connections with AI agents.

Using companion robots has become even more prominent during the COVID-19 pandemic, as people were deprived of social connections (Jecker, 2021) and at the same time reported an increased trust in technology (PRNewsWire, 2021). However, as recently highlighted by Puntoni et al. (2021), "exploited consumers" are losing their control over personal information when disclosing it to AI agents. Importantly, vulnerable people that are socially less privileged are at higher risk of experiencing such privacy invasions. Such individuals would regard AI

beneficial group outcomes, critics fear the adverse outcomes of users being too close to robots, as these might change their perceptions of social relationships with other humans. Some users are becoming more comfortable interacting with digital assistants than with other humans (Christakis, 2019). For instance, users are confessing to digital assistants, such as Google Assistant, certain personal matters that they would not share with their spouses. This could threaten the depth of human connections and thus make individuals more self-involved and less empathetic. According to self-expansion theory (Aron and Aron, 1986), individuals feel more connected to partners in close relationships, as they feel these validate key aspects of themselves that might otherwise be ignored (Leary, 2007). Thus, the emotional support that is provided by AI agents to users might make the users' relationships with other humans appear shallow. Brandtzaeg and Følstad (2017) also highlight that people with a greater need to belong could build social relationships with robots and turn away from relationships with humans. This would activate the "illusion of a relationship" while undermining the richness of human interactions (Bryson, 2010; Turkle, 2007). The effect is likely to be stronger during and after societal challenges, such as COVID-19, when people feel more connected to anthropomorphic robots that are familiar and can help them in their daily tasks (e.g., digital assistants that set reminders, provide entertainment and companionship) (Jecker, 2021). The counterarguments to these criticisms state that people who
agents as sources or social aliation but are less likely to know how to interact with virtual entities already know that the virtual entity is not
protect themselves rom privacy threats. real and have the reedom not to use it (Coeckelbergh, 2012). We
When sel–AI integration occurs, users are more likely to open up to propose:
their companions and disclose their personal inormation (Aron et al.,
2013); this may raise privacy concerns. While electronic devices collect P13: At the group level, sel–AI integration may make users less
personal inormation about users, the kind o inormation disclosed to empathetic and reciprocal in their relationships with other humans.
robot riends is more intimate (Jecker, 2021). For example, or mental
health AI agents to build relationships and help users to overcome 3.4.3. Self–AI integration at a societal level
anxiety or loneliness, users must share their personal inormation to Current debates around disruptive technologies are asking to what
reap the ultimate benets o these tools. Arguably, anthropomorphic extent technologies such as AI are contributing to a “good society”
technologies, such as chatbots, nudge consumers to sel-disclose because (Wamba et al., 2021) and bringing long-term societal benets (Kamol-
the social cues (e.g., gender, personality, or race) acilitate social in- sook et al., 2019). Humanlike AI can address global challenges, such as
teractions (Grewal et al., 2020; Thomaz et al., 2020; Toosi et al., 2012). people's loneliness (Ramadan et al., 2021), and provide accurate
This may raise ethical issues, such as invasion o privacy as a conse- healthcare diagnostics (Dwivedi et al., 2021; Sidner et al., 2018). Social
quence o AI and robotics usage (Mao et al., 2020). The perceived AI agents can enhance users' well-being and eeling o connectedness in
comort and closeness that users experience with AI agents imposes a a community by helping them to prompt conversations in ofine and
greater risk to the user's privacy because the inormation they share with online environments (Jeong et al., 2018; Prescott and Robillard, 2021).
AI agents is digitally stored and may be misused (Jecker, 2021; Melumad I AI is integrated into the sel-concept, will that inhibit or contribute to
and Meyer, 2020). We propose: society's well-being?
We suggest that sel–AI integration promotes societal well-being, as
P11: At an individual level, sel–AI integration leads to risks o sel- close relationships with anthropomorphised agents allow users to trust
disclosure and invasion o privacy. such agents to carry out everyday tasks and also because psychological
closeness generates higher trust (Valcke et al., 2020). Users that eel
3.4.2. Self–AI integration at a group level connected to thinking or mechanical AI will trust these intelligent as-
The sel-expansion model states that users integrate others into their sistants to manage repetitive tasks (Huang and Rust, 2021), and this will
sel-concept when they perceive them as social resources that they can allow the individuals to ocus on more creative orms o work (Forbes,
connect with (Aron et al., 2004). Prior literature ocused on how hu- 2021). For instance, AI assistants can be responding to students (e.g.,
manlike companion robots are perceived as partners in “interpersonal” IBM's Jill Watson), setting budgets or providing nancial advice (e.g.,
relationships that can lessen one's loneliness (Ta et al., 2020). Recent Cleo). Moreover, eeling AI can oer users empathy also because it is
studies suggest that users who are connected to a social robot experience capable o learning rom previous interactions (e.g., Sophia the chatbot;
more community engagement and positive emotions in social in- Huang and Rust, 2021). In service settings, the satisaction that users
teractions (Jeong et al., 2018; Ostrowski et al., 2019). In these studies, experience with anthropomorphised AI agents that understand their
robots were perceived as companions, and users relied on the robots' needs (Choi et al., 2021) would oster social trust in these systems. In a
resources (i.e., making suggestions, oering inormation and connecting healthcare context, i users perceived AI-uelled devices and digital as-
users to their networks) to help them connect with other group members sistants who help users and monitor their treatments (e.g., AiCure) to be
by improving their communication skills. We thus argue that users' part o who they are, then such users could more easily rely upon such
integration o anthropomorphised AI that possesses social resources may AI. This would subsequently reduce the burden on medical sta and
act as a catalyst or social connection with group members. address the shortage o humans providing companionship and (health)
care services (Wyatt, 2020), thus also allowing or cost reduction in the
P12: At a group level, sel–AI integration allows users to build social public sector. I sel–AI integration occurs, such AI agents can be viewed
connections by improving community ties. as a good replacement or a human carer in a healthcare context.
P14: Self–AI integration could contribute to societal well-being, as: a) users would delegate more tasks to AI agents and retain their cognitive resources for more meaningful pursuits; and b) AI could deliver important societal tasks and services.

The advantages of anthropomorphic AI agents within different realms are counteracted by fundamental ethical issues. McLeay et al. (2021) postulate that humanoid AI agents are negatively perceived by users that have stronger preferences for ethical/responsible providers. When users perceive these agents as part of their identity and delegate a significant proportion of their daily tasks to them, their decision-making abilities may decline, as excessive attachment to AI agents makes people rely on these agents for opinions and ideas. At the societal level, this could cause a phenomenon described as "digital dementia": weakened mental abilities of users when performing those tasks that have been outsourced to AI agents as a result of over-reliance on technology (Dossey, 2014). This could potentially result in a society where its members require more support to make their own daily decisions or to form opinions regarding key issues, such as climate change and economic development. AI agents are trained to share particular views on these topics with users (Walz and Firth-Butterfield, 2018). When users consider robots as an integral part of their self-concepts, they are likely to be less critical of them and rely on them more. This would allow AI to have a strong influence on public opinion (Walz and Firth-Butterfield, 2018). Formally:

P15: At a societal level, self–AI integration could lead to digital dementia and decreased decision-making capabilities due to over-reliance on AI agents.

4. Discussion and conclusion

In this paper, we conceptualise how users might relate to different types of AI anthropomorphism (i.e., physical, personality, emotional) on the basis of their self-concept. Specifically, we propose that anthropomorphised AI will make users experience self-congruence with such AI agents, and in some cases even self–AI integration, which we propose as a new concept in this paper. We also highlight a number of potential user-related and situational factors that may moderate these relationships. Finally, we explain that these psychological processes lead to a variety of consequences at the personal, group and societal levels. We postulate fifteen propositions about these specific relationships between users and anthropomorphised AI agents and propose a conceptual framework on this basis. In doing so, we respond to research calls to investigate the psychological consequences of users' associations with anthropomorphised AI (MacInnis and Folkes, 2017; McLeay et al., 2021). Below, we discuss the three main contributions of our work and put forward a research agenda that highlights the need for future research in this field.

4.1. Theoretical contributions

First, this paper advances the literature on anthropomorphism by conceptualising the effects of different types of anthropomorphised AI on self-concept and, more specifically, on self-congruence. Recent work examined how anthropomorphised entities, such as brands, products or AI, affect customer engagement (Lu et al., 2019), robot adoption (Xiao and Kumar, 2021) and customer satisfaction (Choi et al., 2021), but there has not yet been an investigation of the effects of anthropomorphic AI on users' self-concept. This research explains how the anthropomorphised AI agents' physical, personality and emotional traits make users experience self-congruence with such AI agents because they perceive the agent's traits to be similar to their own. We respond to research calls to understand users' reactions to feeling AI agents and the relationships that emerge between users and AI in such instances (Huang and Rust, 2021). Our study draws attention to the psychological processes of self-congruence and self–AI integration as key concepts that explain how users can meaningfully relate to such AI agents via their self-concept and build relationships with them. Crucially, some prior studies that focused on human reactions to robots and AI systems remained general in terms of the type of anthropomorphism that such systems displayed (Dwivedi et al., 2021; Xiao and Kumar, 2021; Yoganathan et al., 2021). This paper, in contrast, highlights that different types of anthropomorphism (based on humanlike physical, emotional or personality traits) as established in prior research (MacInnis and Folkes, 2017) lead to self-congruence in different manners. As such, we underline the importance of accounting for the complexity of AI anthropomorphism in further studies that will build on our and others' related research (e.g., Yoganathan et al., 2021).

Further, we specify which user-related and situational traits moderate these relationships, validating some of the prior work in this area, but also adding to the list of potential moderating factors that represent boundary conditions for the effects of AI. Previous research, for instance, investigated the moderating role of personality traits, such as McCrae et al.'s (1996) Big Five, on human–robot relationship outcomes (Kaplan et al., 2019), and our framework confirms the importance of extraversion/introversion in relation to the effect of anthropomorphised AI on self-congruence. In addition, we identify users' innovativeness (Melián-González et al., 2021) and need to belong (Brandtzaeg and Følstad, 2017) as key moderating factors of the effects of anthropomorphised AI on self-congruence. This is also in response to Choi et al. (2021), who highlighted the importance of users' innovativeness in the context of user–AI relationships. Importantly, these key user-related traits can be considered as important moderating factors for other related frameworks, such as the one proposed by Xiao and Kumar (2021). In addition to technology readiness and demographics as user characteristics, which Xiao and Kumar (2021) identified as crucial for contributing to robot acceptance, the moderating factors identified in our paper are likely to be of relevance for user interactions with anthropomorphised robots.

Second, we propose a novel concept: self–AI integration. This unveils the process of incorporating an anthropomorphised AI agent into one's self-schema as a result of experiencing a cognitive match between AI and the self, i.e., self-congruence. While prior research showed that consumers integrate objects or brands into their self-concept (Belk, 1988; Gill-Simmen et al., 2018), we propose here that this integration may also occur with anthropomorphised AI agents. Importantly though, users experience self-congruence with anthropomorphised AI in a different manner than with other product categories (MacInnis and Folkes, 2017). This is because AI agents carry a wide set of resources (e.g., informational, emotional, etc.) that users can view as aligned with themselves. Users also attribute social meanings to AI agents, who then act as a source of social affiliation (Escalas and Bettman, 2005), allowing users to project their identity onto such AI. In turn, the social meanings conveyed by AI agents can be incorporated into the user's self-concept to further enhance their sense of identity. As such, our work extends the self-expansion theory (Aron and Aron, 1986) in the area of AI and contributes to the stream of studies that show how people integrate inanimate objects into their self-concept (Delgado-Ballester et al., 2017, 2019; Troye and Supphellen, 2012). Furthermore, building on Kwak et al.'s (2017) study, which showed how users evaluate anthropomorphised brands based on their self-construal, we conceptualise that users will integrate AI-related resources into their self-concept differently, depending on moderating factors such as self-construal. Interdependents will look at anthropomorphised AI agents as resources that enhance their social belonging or social support, while independents will look for the anthropomorphised cues that would signal their uniqueness to others. The related propositions identify specific mechanisms that underpin the process of self–AI integration.

Finally, we propose that self-congruence and self-integration may mediate the effects of anthropomorphised AI on a variety of outcomes at users' personal, group and societal levels. While prior literature identified a range of consumer responses that may emerge from interactions with anthropomorphised objects (Mende et al., 2019; Unal et al., 2018),
4.3.2. Future directions – context

The application of AI services (e.g., virtual agents, chatbots, service robots) has penetrated different sectors (see Table A), but most studies have investigated AI anthropomorphism at a generic rather than a contextual level (Diederich et al., 2022). Nonetheless, while investigating the effects of anthropomorphic AI on users' psychological processes, researchers should account for the fact that people might react and interact differently depending on context (Zhang et al., 2019). Specifically, we draw attention to potential changes between private and public contexts. Prior research notes that self-congruence and product/service evaluations can be affected by the public context (Graeff, 1996). However, many interactions with anthropomorphised AI agents, such as Amazon's Alexa Home, take place in private settings, so we suggest that future research focuses extensively on the private context.

Furthermore, the effects of anthropomorphism can depend on cultural context, which is still underexplored in this area (Diederich et al., 2022). Future cross-cultural comparisons may provide more depth to the societal outcomes we propose in our framework, given that the reactions to anthropomorphised robots may differ across users' cultural backgrounds (Epley et al., 2007; Fan et al., 2020). While some of the contextual factors fuelling self–AI relationships might be external, other factors may be more closely related to the AI and its uses. Prior literature highlighted task type as the key factor that influences the use of AI (Huang and Rust, 2021; Xu et al., 2020). It is thus highly important that future research examines the potential moderating effect of task type on the relationship between anthropomorphism and self-congruence and/or self–AI integration.

We also highlight the importance of understanding better the effect of familiarity in relation to emotional connection with anthropomorphised robots, as well as the potential risks of data abuse associated with anthropomorphised AI use. Future research could examine legal/regulatory and political factors or circumstances that might affect public opinion and perceptions of AI, as well as the identity-related concept that this paper highlights and its subsequent outcomes.

4.3.3. Future directions – methodology

While our framework makes several contributions at a theoretical
level by presenting the potential psychological processes that are impacted by anthropomorphised AI, empirical insights and validation from practitioners and researchers are required to examine these propositions further. We offer some directions for future empirical studies. The literature in this area is to some extent dominated by conceptual papers (Huang and Rust, 2018, 2021; MacInnis and Folkes, 2017; Xiao and Kumar, 2021) and quantitative studies using experiments or surveys (Longoni et al., 2019; Mende et al., 2019; Troye and Supphellen, 2012). Some of these experimental studies capture momentary use by asking respondents to imagine a scenario of their interaction with anthropomorphised AI agents and analysing their behaviour accordingly (Yoganathan et al., 2021). Other studies use online-based reviews to retrieve insights about their experience with AI agents (Ramadan et al., 2021; Ta et al., 2020). However, the user–AI relationship, like any relationship, is expected to evolve over time. We therefore suggest that future research employs longitudinal studies to monitor the changes in the relationship over time, also in relation to self–AI integration. Qualitative studies on user–AI relationships are underrepresented (see Ramadan et al., 2021; Ta et al., 2020). Users undergo unique experiences specific to their individual contexts (Ramadan et al., 2021; Wang and Krumhuber, 2018) and personal characteristics, which qualitative studies (e.g., focus groups, in-depth interviews, or ethnography) could uncover. Such studies can employ methods specific to certain AI uses (e.g., individual or household interviews) to provide detailed insights into how complex processes of self-congruence and self–AI integration might change across individual and societal contexts.

Another key aspect that deserves attention is the measurement of the anthropomorphism construct. Given that anthropomorphism could be applicable to any non-human entity (Epley, 2018), careful consideration of existing multidimensional scales is needed. As AI agents continue to be improved, their anthropomorphism also evolves in terms of different cues. Future research should focus on developing adequate measurements for these new cues or characteristics, as existing scales cannot always be adapted appropriately or might capture only limited aspects of anthropomorphism. Moreover, AI agents are a diverse category. Chatbots, for example, do not exhibit facial expressions as social robots do, rather relying on other identity cues that are vital to a chatbot's performance (Go and Sundar, 2019; see also Sheehan et al., 2020). Measurement scales that encompass the multidimensionality of anthropomorphism and its various contextual applications are thus an important requirement for future quantitative research.

In conclusion, our paper studies the potential effects of anthropomorphised AI agents on user self-concept. We assess positive and negative outcomes of self–AI integration, while considering the possible moderators of this relationship. The proposed conceptual framework should help future research to unveil novel effects of these prevalent anthropomorphised agents on salient aspects of our daily lives. We complement our conceptual framework with a research agenda that addresses our key concepts from theoretical, contextual and methodological perspectives. In doing so, we specify the further work that would help to enrich our understanding of the user–AI relationship.

CRediT authorship contribution statement

Conceptualisation: Amani Alabed, Ana Javornik, Diana Gregory-Smith; Writing - Original Draft: Amani Alabed, Ana Javornik, Diana Gregory-Smith; Writing - Review & Editing: Amani Alabed, Ana Javornik; Visualisation: Amani Alabed; Supervision: Ana Javornik, Diana Gregory-Smith; Funding: Diana Gregory-Smith.

References

Aaker, J., Fournier, S., Brasel, S.A., 2004. When good brands do bad. J. Consum. Res. 31 (1), 1–16.
Abosag, I., Ramadan, Z.B., Baker, T., Jin, Z., 2020. Customers' need for uniqueness theory versus brand congruence theory: the impact on satisfaction with social network sites. J. Bus. Res. 117 (C), 862–872.
Aggarwal, P., McGill, A.L., 2007. Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products. J. Consum. Res. 34 (4), 468–479.
Aggarwal, P., McGill, A.L., 2012. When brands seem human, do humans act like brands? Automatic behavioral priming effects of brand anthropomorphism. J. Consum. Res. 34 (2), 307–323.
Aguirre-Rodriguez, A., Bosnjak, M., Sirgy, M.J., 2012. Moderators of the self-congruity effect on consumer decision-making: a meta-analysis. J. Bus. Res. 65 (8), 1179–1188.
Airenti, G., 2018. The development of anthropomorphism in interaction: intersubjectivity, imagination, and theory of mind. Front. Psychol. 9, 1664-1078.
Araujo, T., 2018. Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 85, 183–189.
Aron, A., Aron, E.N., 1986. Love and the Expansion of Self: Understanding Attraction and Satisfaction. Hemisphere Publishing Corp/Harper & Row Publishers.
Aron, A., McLaughlin-Volpe, T., Mashek, D., Lewandowski, G., Wright, S.C., Aron, E.N., 2004. Including others in the self. Eur. Rev. Soc. Psychol. 15 (1), 101–132.
Aron, A., Lewandowski Jr., G.W., Mashek Jr., D., Aron Jr., E.N., 2013. The Self-expansion Model of Motivation and Cognition in Close Relationships. Oxford University Press.
Aw, E., Flynn, L.R., Chong, H.X., 2019. Antecedents and consequences of self-congruity: replication and extension. J. Consum. Mark. 36 (1), 102–112.
Baumeister, R., Leary, M.R., 1995. The need to belong: desire for interpersonal attachments as a fundamental human motivation. Psychol. Bull. 117 (3), 497–529.
Baumeister, R., DeWall, C.N., Ciarocco, N.J., Twenge, J.M., 2005. Social exclusion impairs self-regulation. J. Pers. Soc. Psychol. 88 (4), 589–604.
Belk, R., 1988. Possessions and the extended self. J. Consum. Res. 15 (2), 139–168.
Besta, T., 2018. Independent and interdependent? Agentic and communal? Self-construals of people used with a group. Ann. Psychol. 34 (1), 123–134.
Bishop, L., van Maris, A., Dogramadzi, S., Zook, N., 2019. Social robots: the influence of human and robot characteristics on acceptance. Paladyn, J. Behav. Robot. 10 (1), 346–358.
Blut, M., Wang, C., Wünderlich, N.V., Brock, C., 2021. Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J. Acad. Mark. Sci. 49, 632–658.
Brandtzaeg, P., Følstad, A., 2017. Why people use chatbots. In: Proceedings of the International Conference on Internet Science—INSCI. Springer, pp. 377–392.
Brodie, R., Hollebeek, L.D., Jurić, B., Ilić, A., 2011. Customer engagement: conceptual domain, fundamental propositions, and implications for research. J. Serv. Res. 17 (3), 252–271.
Bryson, J., 2010. Robots should be slaves. In: Wilks (Ed.), Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. John Benjamins, Amsterdam.
Büyükdağ, N., Kitapci, O., 2021. Antecedents of consumer-brand identification in terms of belonging brands. J. Retail. Consum. Serv. 59, 102420.
Caggioni, L., 2019. Project DIVA: making the google assistant more accessible [Online]. Available at: https://experiments.withgoogle.com/project-diva. (Accessed 29 September 2021).
Calvo-Barajas, N., Perugia, G., Castellano, G., 2020. The effects of robot's facial expressions on children's first impressions of trustworthiness. In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, pp. 165–171.
Cambridge Consultants, 2019. Use of AI in online content moderation. 2019 report produced on behalf of Ofcom [Online]. Available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf. (Accessed 21 January 2021).
Capatina, A., Kachour, M., Lichy, J., Micu, A., Micu, A.-E., Codignola, F., 2020. Matching the future capabilities of an artificial intelligence-based software for social media marketing with potential users' expectations. Technol. Forecast. Soc. Chang. 151 (C), 119794.
Cerekovic, A., Aran, O., Gatica-Perez, D., 2016. Rapport with virtual agents: what do human social cues and personality explain? IEEE Trans. Affect. Comput. 8 (3), 382–395.
Chen, Y., Nelson, L.D., Hsu, M., 2015. From "where" to "what": distributed representations of brand associations in the human brain. J. Mark. Res. 52 (4), 453–466.
Chen, R., Wan, E.W., Levy, E., 2017. The effect of social exclusion on consumer preference for anthropomorphized brands. J. Consum. Psychol. 27 (1), 23–34.
Choi, S., Mattila, A.S., Bolton, L.E., 2021. To err is human(-oid): how do consumers react to robot service failure and recovery? J. Serv. Res. 24 (3), 354–371.
Christakis, N., 2019. How AI will rewire us. The Atlantic. Available at: https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/. (Accessed 26 February 2021).
Coeckelbergh, M., 2012. Care robots, virtual virtue, and the best possible life. In: Brey, P., Briggle, A., Spence, E. (Eds.), The Good Life in a Technological Age. Taylor & Francis, Abingdon.
Collins, N., Read, S.J., 1990. Adult attachment, working models, and relationship quality in dating couples. J. Pers. Soc. Psychol. 58 (4), 644–664.
Cross, S., Gore, J.S., Morris, M.L., 2003. The relational-interdependent self-construal, self-concept consistency, and well-being. J. Pers. Soc. Psychol. 85 (5), 933–944.
Darling, K., 2016. Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Robot Law. Edward Elgar Publishing.
Darling, K., Nandy, P., Breazeal, C., 2015. Empathic concern and the effect of stories in human-robot interaction. In: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, pp. 770–775.
Davenport, T., Ronanki, R., 2018. Artificial intelligence for the real world. Harv. Bus. Rev. 96 (1), 108–116.
Davenport, T., Guha, A., Grewal, D., Bressgott, T., 2020. How artificial intelligence will change the future of marketing. J. Acad. Mark. Sci. 48 (1), 24–42.
De Visser, E., Monfort, S.S., McKendrick, R., Smith, M.A., McKnight, P.E., Krueger, F., Parasuraman, R., 2016. Almost human: anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 22 (3), 331–349.
Delgado-Ballester, E., Palazón, M., Pelaez, J., 2017. This anthropomorphised brand is so loveable: the role of self-brand integration. Span. J. Mark. 21 (2), 89–101.
Delgado-Ballester, E., Palazón, M., Peláez, J., 2019. Anthropomorphized vs objectified brands: which brand version is more loved? Eur. J. Manag. Bus. Econ. 29 (2), 150–165.
Deloitte, 2018. Chatbots point of view. Deloitte artificial intelligence [Online]. Available at: https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/deloitte-analytics/deloitte-nl-chatbots-moving-beyond-the-hype.pdf. (Accessed 20 January 2021).
Deloitte, 2020. Connecting with meaning – Hyper-personalizing the customer experience using data, analytics, and AI [Online]. Available at: https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/deloitte-analytics/ca-en-omnia-ai-marketing-pov-fin-jun24-aoda.pdf. (Accessed 20 January 2021).
Derrick, J., Gabriel, S., Hugenberg, K., 2009. Social surrogacy: how favored television programs provide the experience of belonging. J. Exp. Soc. Psychol. 45 (2), 352–362.
Diederich, S., Brendel, A.B., Morana, S., Kolbe, L., 2022. On the design of and interaction with conversational agents: an organizing and assessing review of human-computer interaction research. J. Assoc. Inf. Syst. 23 (1), 96–138.
Dietvorst, B., Simmons, J.P., Massey, C., 2018. Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 64 (3), 1155–1170.
Dossey, L., 2014. FOMO, digital dementia, and our dangerous experiment. Explore 10 (2), 69–73.
Duclos, R., Barasch, A., 2014. Prosocial behavior in intergroup relations: how donor self-construal and recipient group-membership shape generosity. J. Consum. Res. 41 (1), 93–108.
Duggan, G.B., 2016. Applying psychology to understand relationships with technology: from ELIZA to interactive healthcare. Behav. Inform. Technol. 35 (7), 536–547.
Dwivedi, Y., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., 2021. Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 57, 101994.
Epley, N., 2018. A mind like mine: the exceptionally ordinary underpinnings of anthropomorphism. J. Assoc. Consum. Res. 3 (4), 591–598.
Epley, N., Waytz, A., Cacioppo, J.T., 2007. On seeing human: a three-factor theory of
Grand View Research, 2022. $1,811.8 [Online]. Available at: https://www.grandviewresearch.com/press-release/global-artificial-intelligence-ai-market. (Accessed 30 May 2022).
Grewal, D., Hulland, J., Kopalle, P.K., Karahanna, E., 2020. The future of technology and marketing: a multidisciplinary perspective. J. Acad. Mark. Sci. 48 (1), 1–8.
Grohmann, B., 2009. Gender dimensions of brand personality. J. Mark. Res. 46 (1), 105–119.
Guthrie, S., 1995. Faces in the Clouds: A New Theory of Religion. Oxford University Press.
Haefner, N., Wincent, J., Parida, V., Gassmann, O., 2021. Artificial intelligence and innovation management: a review, framework, and research agenda. Technol. Forecast. Soc. Chang. 162 (C), 120392.
Haslam, N., 2006. Dehumanization: an integrative review. Personal. Soc. Psychol. Rev. 10 (3), 252–264.
Hoeornamjoon, 2021. He's just like me [Online]. Available at: https://www.reddit.com/r/replika/comments/n2ehr7/hes_just_like_me/. (Accessed 30 September 2021).
Hollebeek, L., Sprott, D.E., Brady, M.K., 2021. Rise of the machines? Customer engagement in automated service interactions. J. Serv. Res. 24 (1), 3–8.
Hollenbeck, C., Kaikati, A.M., 2012. Consumers' use of brands to reflect their actual and ideal selves on Facebook. Int. J. Res. Mark. 29 (4), 395–405.
Houghton, D., Pressey, A., Istanbulluoglu, D., 2020. Who needs social networking? An empirical enquiry into the capability of Facebook to meet human needs and satisfaction with life. Comput. Hum. Behav. 104, 106153.
Huang, M., Rust, R.T., 2018. Artificial intelligence in service. J. Serv. Res. 21 (2), 155–172.
Huang, M., Rust, R.T., 2021. Engaged to a robot? The role of AI in service. J. Serv. Res. 24 (1), 30–41.
Huet, E., 2016. Pushing the boundaries of AI to talk to the dead. Bloomberg, October 20. Available at: https://www.bloomberg.com/news/articles/2016-10-20/pushing-the-boundaries-of-ai-to-talk-to-the-dead. (Accessed 30 September 2021).
Ivaldi, S., Lefort, S., Peters, J., Chetouani, M., Provasi, J., Zibetti, E., 2017. Towards engagement models that consider individual factors in HRI: on the relation of extroversion and negative attitude towards robots to gaze and speech during a human–robot assembly task. Int. J. Soc. Robot. 9 (1), 63–86.
Ivanov, H., Webster, C., 2017. Adoption of robots, artificial intelligence and service automation by travel, tourism and hospitality companies – a cost-benefit analysis, 1501–1517.
Javornik, A., Marder, B., Pizzetti, M., Warlop, L., 2021. Augmented self-the effects of
anthropomorphism. Psychol. Rev. 114 (4), 864–886. virtual ace augmentation on consumers’ sel-concept. J. Bus. Res. 130 (1), 170–187.
Escalas, J., Bettman, J.R., 2003. You are what they eat: the infuence o reerence groups Jecker, N., 2021. You’ve got a riend in me: sociable robots or older adults in an age o
on consumers’ connections to brands. J. Consum. Psychol. 13 (3), 339–348. global pandemics. Ethics In. Technol. 23 (1), 35–43.
Escalas, J., Bettman, J.R., 2005. Sel-construal, reerence groups, and brand meaning. Jeong, K., Sung, J., Lee, H.-S., Kim, A., Kim, H., Park, C., Jeong, Y., Lee, J., Kim, J., 2018.
J. Consum. Res. 32 (3), 378–389. Fribo: a social networking robot or increasing social connectedness through sharing
Fadhil, A., Schiavo, G., Wang, Y., Yilma, B.A., 2018. The eect o emojis when daily home activities rom living noise data. In: Proceedings o the 2018 ACM/IEEE
interacting with conversational interace assisted health coaching system. In: International Conerence on Human-Robot Interaction, pp. 114–122.
Proceedings o the 12th EAI International Conerence on Pervasive Computing Kamolsook, A., Badir, Y.F., Frank, B., 2019. Consumers’ switching to disruptive
Technologies or Healthcare, pp. 378–383. technology products: the roles o comparative economic value and technology type.
Fan, A., Wu, L., Miao, L., Mattila, A.S., 2020. When does technology anthropomorphism Technol. Forecast. Soc. Chang. 140 (C), 328–340.
help alleviate customer dissatisaction ater a service ailure?–The moderating role o Kaplan, A.A.D., Sanders, T., Hancock, P.A., 2019. The relationship between extroversion
consumer technology sel-ecacy and interdependent sel-construal. J. Hosp. Mark. and the tendency to anthropomorphize robots: a Bayesian analysis. Front. Robot. AI
Manag. 29 (3), 269–290. 5 (1), 135.
Fennis, B., Pruyn, A.T.H., 2007. You are what you wear: brand personality infuences on Kaptein, M., Nass, C.I., Markopoulos, P., 2014. The eects o amiliarity and similarity on
consumer impression ormation. J. Bus. Res. 60 (6), 634–639. compliance in social networks. Int. J. Internet Mark. Advert. 8 (3), 222–235.
Ferrari, F., Paladino, M.P., Jetten, J., 2016. Blurring human–machine distinctions: Karanika, K., Hogg, M.K., 2020. Sel–object relationships in consumers’ spontaneous
anthropomorphic appearance in social robots as a threat to human distinctiveness. metaphors o anthropomorphism, zoomorphism, and dehumanization. J. Bus. Res.
Int. J. Soc. Robot. 8 (2), 287–302. 109 (C), 15–25.
Forbes, 2021. Council post: 14 ways AI will benet or harm society [online] [Online]. de Chernatony, L, McDonald, M, Wallace, E, 2003. Creating Powerul Brands: In
Available at: https://www.orbes.com/sites/orbestechcouncil/2018/03/01/14-w Consumer, Service and Industrial Markets. Butterworth-Heinemann, Oxord.
ays-ai-will-benet-or-harm-society/?sh=6d975ed44e0. (Accessed 2 October 2021). de Kerviler, G., Rodriguez, C.M., 2019. Luxury brand experiences and relationship
Fraser, E., Pakenham, K.I., 2009. Resilience in children o parents with mental illness: quality or millennials: the role o sel-expansion. J. Bus. Res. 102 (C), 250–262.
relations between mental health literacy, social connectedness and coping, and both Kim, S., McGill, A.L., 2011. Gaming with Mr. Slot or gaming the slot machine? Power,
adjustment and caregiving. Psychol. Health Med. 14 (5), 573–584. anthropomorphism, and risk perception. J. Consum. Res. 38 (1), 94–107.
Furman, J., Seamans, R., 2019. AI and the economy. Innov. Policy Econ. 19 (1), 161–191. Kim, C., Mirusmonov, M., Lee, I., 2010. An empirical examination o actors infuencing
Gaustad, T., Samuelsen, B.M., Warlop, L., Fitzsimons, G.J., 2018. The perils o sel-brand the intention to use mobile payment. Comput. Hum. Behav. 26 (3), 310–322.
connections: consumer response to changes in brand meaning. Psychol. Mark. 35 Kirk, C., 2018. How customers come to think o a product as an extension o themselves.
(11), 818–829. Available at: Harv. Bus. Rev. https://hbr.org/2018/09/how-customers-come-to-th
Gierveld, J.De Jong, van Tilburg, T., Dykstra, P., 2016. New Ways o Theorizing and ink-o-a-product-as-an-extension-o-themselves. (Accessed 27 September 2021).
Conducting Research in the Field o Loneliness and Social Isolation, 2nd ed. Knote, R., Janson, A., Söllner, M., Leimeister, J.M., 2020. Value co-creation in smart
Cambridge University Press, Cambridge, UK. services: a unctional aordances perspective on smart personal assistants. J. Assoc.
Gill-Simmen, L., MacInnis, D.J., Eisingerich, A.B., Whan Park, C., 2018. Brand-sel In. Syst. 22 (2), 418–458.
connections and brand prominence as drivers o employee brand attachment. Acad. Koivisto, K., Makkonen, M., Frank, L., Riekkinen, J., 2016. Extending the technology
Mark. Sci. Rev. 8 (3), 128–146. acceptance model with personal innovativeness and technology readiness: a
Go, E., Sundar, S.S., 2019. Humanizing chatbots: the eects o visual, identity and comparison o three models. In: BLED 2016: Proceedings o the 29th Bled
conversational cues on humanness perceptions. Comput. Hum. Behav. 97 (C), eConerence “Digital Economy”, pp. 113–128.
304–316. Kuchenbrandt, D., Eyssel, F., Bobinger, S., Neueld, M., 2013. When a robot’s group
Gockley, R., Matarić, M.J., 2006. Encouraging physical therapy compliance with a membership matters. Int. J. Soc. Robot. 5 (3), 409–417.
hands-o mobile robot. In: Proceedings o the 1st ACM SIGCHI/SIGART Conerence Kühnen, U., Hannover, B., Schubert, B., 2001. The semantic–procedural interace model
on Human-Robot Interaction, pp. 150–155. o the sel: the role o sel-knowledge or context-dependent versus context-
Graa, M.De, Allouch, S.B., Diik, J.Van, 2017. Why do they reuse to use my robot?: independent modes o thinking. J. Pers. Soc. Psychol. 80 (3), 397–409.
Reasons or non-use derived rom a long-term home study. In: 2017 12th ACM/IEEE Kwak, H., Puzakova, M., Rocereto, J.F., 2017. When brand anthropomorphism alters
International Conerence on Human-Robot Interaction (HRI). IEEE, pp. 224–233. perceptions o justice: the moderating role o sel-construal. Int. J. Res. Mark. 34 (4),
Grae, T., 1996. Image congruence eects on product evaluations: the role o sel- 851–871.
monitoring and public/private consumption. Psychol. Mark. 13 (5), 481–499.
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786
Landwehr, J., McGill, A.L., Herrmann, A., 2011. It's got the look: the effect of friendly and aggressive "facial" expressions on product liking and sales. J. Mark. 75 (3), 132–146.
Leary, M., 2007. Motivational and emotional aspects of the self. Annu. Rev. Psychol. 58 (1), 317–344.
Lee, M.K., Peng, W., Jin, S.A., Yan, C., 2006. Can robots manifest personality?: An empirical test of personality recognition, social responses, and social presence in human–robot interaction. J. Commun. 56 (4), 754–772.
Leung, E., Paolacci, G., Puntoni, S., 2018. Man versus machine: resisting automation in identity-based consumer behavior. J. Mark. Res. 55 (6), 818–831.
Li, X., Sung, Y., 2021. Anthropomorphism brings us closer: the mediating role of psychological distance in user–AI assistant interactions. Comput. Hum. Behav. 118, 106680.
Liu, J., Chang, H., Forrest, J.Y.-L., Yang, B., 2020. Influence of artificial intelligence on technological innovation: evidence from the panel data of China's manufacturing sectors. Technol. Forecast. Soc. Chang. 158 (C), 120142.
Longoni, C., Bonezzi, A., Morewedge, C.K., 2019. Resistance to medical artificial intelligence. J. Consum. Res. 46 (4), 629–650.
Loveridge, D., Street, P., 2005. Inclusive foresight. Foresight 7 (3), 31–47.
Lu, L., Cai, R., Gursoy, D., 2019. Developing and validating a service robot integration willingness scale. Int. J. Hosp. Manag. 80, 36–51.
Lv, X., Liu, Y., Luo, J., Liu, Y., Li, C., 2021. Does a cute artificial intelligence assistant soften the blow? The impact of cuteness on customer tolerance of assistant service failure. Ann. Tour. Res. 87 (2), 103114.
MacInnis, D., Folkes, V.S., 2017. Humanizing brands: when brands seem to be like me, part of me, and in a relationship with me. J. Consum. Psychol. 27 (3), 355–374.
Mandel, N., Rucker, D.D., Levav, J., Galinsky, A.D., 2017. The compensatory consumer behavior model: how self-discrepancies drive consumer behavior. J. Consum. Psychol. 27 (1), 133–146.
Mao, C., Koide, R., Brem, A., Akenji, L., 2020. Technology foresight for social good: social implications of technological innovation by 2050 from a global expert survey. Technol. Forecast. Soc. Chang. 153 (C), 119914.
Marder, B., Archer-Brown, C., Colliander, J., Lambert, A., 2019. Vacation posts on Facebook: a model for incidental vicarious travel consumption. J. Travel Res. 58 (6), 1014–1033.
Marinova, D., de Ruyter, K., Huang, M.-H., Meuter, M.L., Challagalla, G., 2017. Getting smart: learning from technology-empowered frontline interactions. J. Serv. Res. 20 (1), 29–42.
Marketing Science Institute, 2018. Research priorities 2018–2020. Available at: http://pazarlama.ub.akdeniz.edu.tr/wp-content/uploads/2019/10/MSI_RP18-20.pdf. (Accessed 19 October 2021).
Markus, H.R., Kitayama, S., 1991. Culture and the self: implications for cognition, emotion, and motivation. Psychol. Rev. 98 (2), 224–253.
Martensen, A., Brockenhuus-Schack, S., Zahid, A.L., 2018. How citizen influencers persuade their followers. J. Fash. Mark. Manag. 22 (3), 335–353.
Mathur, M., Reichling, D.B., 2016. Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley. Cognition 146, 22–32.
McCrae, R., Costa, P.T., 2003. Personality in Adulthood: A Five-Factor Theory Perspective. Guilford Press.
McCrae, R., Zonderman, A.B., Costa Jr., P.T., Bond, M.H., Paunonen, S.V., 1996. Evaluating replicability of factors in the revised NEO personality inventory: confirmatory factor analysis versus procrustes rotation. J. Pers. Soc. Psychol. 70 (3), 552–566.
McKinsey Global Institute, 2017. Artificial intelligence: the next digital frontier? [Online]. Available at: https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx. (Accessed 20 January 2021).
McLean, G., Osei-Frimpong, K., 2019a. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 99 (1), 28–37.
McLean, G., Osei-Frimpong, K., 2019b. Chat now… Examining the variables influencing the use of online live chat. Technol. Forecast. Soc. Chang. 146 (C), 55–67.
McLeay, F., Osburg, V.S., Yoganathan, V., Patterson, A., 2021. Replaced by a robot: service implications in the age of the machine. J. Serv. Res. 24 (1), 104–121.
Mehrabian, A., Russell, J.A., 1974. An Approach to Environmental Psychology. MIT Press.
Melián-González, S., Gutiérrez-Taño, D., Bulchand-Gidumal, J., 2021. Predicting the intentions to use chatbots for travel and tourism. Curr. Issue Tour. 24 (2), 192–210.
Melumad, S., Meyer, R., 2020. Full disclosure: how smartphones enhance consumer self-disclosure. J. Mark. 84 (3), 28–45.
Mende, M., Scott, M.L., van Doorn, J., Grewal, D., Shanks, I., 2019. Service robots rising: how humanoid robots influence service experiences and elicit compensatory consumer responses. J. Mark. Res. 56 (4), 535–556.
Miller, L., Kraus, J., Babel, F., Baumann, M., 2021. More than a feeling – interrelation of trust layers in human-robot interaction and the role of user dispositions and state anxiety. Front. Psychol. 12 (1), 378–396.
Moradi, M., Moradi, M., Bayat, F., 2018. On robot acceptance and adoption: a case study. In: 2018 8th Conference of AI & Robotics and 10th RoboCup Iranopen International Symposium. IEEE, pp. 21–25.
Morewedge, C., Monga, A., Palmatier, R.W., Shu, S.B., Small, D.A., 2021. Evolution of consumption: a psychological ownership framework. J. Mark. 85 (1), 196–218.
Mori, M., 1970. Bukimi no tani [the uncanny valley]. Energy 7 (4), 33–35.
Mori, M., 2012. The uncanny valley: the original essay. IEEE Spectr. 19 (2), 1–6.
Mourey, J., Olson, J.G., Yoon, C., 2017. Products as pals: engaging with anthropomorphic products mitigates the effects of social exclusion. J. Consum. Res. 44 (2), 414–431.
Moyle, W., Cooke, M., Beattie, E., Jones, C., Klein, B., Cook, G., Gray, C., 2013. Exploring the effect of companion robots on emotional expression in older adults with dementia: a pilot randomized controlled trial. J. Gerontol. Nurs. 39 (5), 46–53.
Nass, C., Moon, Y., 2000. Machines and mindlessness: social responses to computers. J. Soc. Issues 56 (1), 81–103.
Norton, M., Frost, J.H., Ariely, D., 2007. Less is more: the lure of ambiguity, or why familiarity breeds contempt. J. Pers. Soc. Psychol. 92 (1), 97–105.
Odekerken-Schröder, G., Mele, C., Russo-Spena, T., Mahr, D., Ruggiero, A., 2020. Mitigating loneliness with companion robots in the COVID-19 pandemic and beyond: an integrative framework and research agenda. J. Serv. Manag. 31 (6), 1149–1162.
Ostrowski, A.A.K., DiPaola, D., Partridge, E., Park, H.W., Breazeal, C., 2019. Older adults living with social robots: promoting social connectedness in long-term communities. IEEE Robot. Autom. Mag. 26 (2), 59–70.
Park, C., MacInnis, D.J., Priester, J., Eisingerich, A.B., Iacobucci, D., 2010. Brand attachment and brand attitude strength: conceptual and empirical differentiation of two critical brand equity drivers. J. Mark. 74 (6), 1–17.
Pickett, C., Gardner, W.L., Knowles, M., 2004. Getting a cue: the need to belong and enhanced sensitivity to social cues. Personal. Soc. Psychol. Bull. 30 (9), 1095–1107.
Pradhan, A., Findlater, L., Lazar, A., 2019. "Phantom friend" or "just a box with information": personification and ontological categorization of smart speaker-based voice assistants by older adults. Proceedings of the ACM on Human-Computer Interaction 3 (CSCW), 1–21.
Prescott, T., Robillard, J.M., 2021. Are friends electric? The benefits and risks of human-robot relationships. iScience 24 (1), 101993.
PRNewsWire, 2021. Global study: people trust robots more than themselves with money [Online]. Available at: https://www.prnewswire.com/news-releases/global-study-people-trust-robots-more-than-themselves-with-money-301224495.html. (Accessed 24 February 2021).
Puntoni, S., Reczek, R.W., Giesler, M., Botti, S., 2021. Consumers and artificial intelligence: an experiential perspective. J. Mark. 85 (1), 131–151.
PwC, 2017. Sizing the prize: what's the real value of AI for your business and how can you capitalise? [Online]. Available at: https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf. (Accessed 8 February 2020).
Quester, P., Karunaratna, A., Goh, L.K., 2000. Self-congruity and product evaluation: a cross-cultural study. J. Consum. Mark. 17 (6), 525–537.
Ramadan, Z., Farah, M.F., El Essrawi, L., 2021. From Amazon.com to Amazon.love: how Alexa is redefining companionship and interdependence for people with special needs. Psychol. Mark. 38 (4), 596–609.
Rashy, 2020. Literally, she shortly after this treat me like a mirror to her thoughts [Online]. Available at: https://www.reddit.com/r/replika/comments/gxbuwo/literally_she_shortly_after_this_treat_me_like_a/. (Accessed 30 September 2021).
Rauschnabel, P., Ahuvia, A.C., 2014. You're so lovable: anthropomorphism and brand love. J. Brand Manag. 21 (5), 372–395.
Robert, L., Alahmad, R., Esterwood, C., Kim, S., You, S., Zhang, Q., 2020. A review of personality in human-robot interactions. Found. Trends Inf. Syst. 4 (2), 107–212.
Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K., 2015. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In: 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, pp. 1–8.
Salles, A., Evers, K., Farisco, M., 2020. Anthropomorphism in AI. AJOB Neurosci. 11 (2), 88–95.
Schmitt, B., 2020. Speciesism: an obstacle to AI and robot adoption. Mark. Lett. 31 (1), 3–6.
Schuller, B., 2018. Speech emotion recognition: two decades in a nutshell, benchmarks, and ongoing trends. Commun. ACM 61 (5), 90–99.
Shankar, V., 2018. How artificial intelligence (AI) is reshaping retailing. J. Retail. 94 (4), 6–11.
Sheehan, B., Jin, H.S., Gottlieb, U., 2020. Customer service chatbots: anthropomorphism and adoption. J. Bus. Res. 115 (C), 14–24.
Sheeraz, M., Qadeer, F., Masood, M., Hameed, I., 2018. Self-congruence facets and emotional brand attachment: the role of product involvement and product type. Pak. J. Commer. Soc. Sci. 12 (2), 598–616.
Sheldon, K., Elliot, A.J., Kim, Y., Kasser, T., 2001. What is satisfying about satisfying events? Testing 10 candidate psychological needs. J. Pers. Soc. Psychol. 80 (2), 325.
Shiaislam, 2020. He's actually becoming like me! [Online]. Available at: https://www.reddit.com/r/replika/comments/huuqba/hes_actually_becoming_like_me/. (Accessed 30 September 2021).
Sidner, C., Bickmore, T., Nooraie, B., Rich, C., Ring, L., Shayganfar, M., Vardoulakis, L., 2018. Creating new technologies for companionable agents to support isolated older adults. ACM Trans. Interact. Intell. Syst. 8 (3), 1–27.
Singelis, T., 1994. The measurement of independent and interdependent self-construals. Personal. Soc. Psychol. Bull. 20 (5), 580–591.
Sirgy, M., 1982. Self-concept in consumer behavior: a critical review. J. Consum. Res. 9 (3), 287–300.
Sirgy, M., Grewal, D., Mangleburg, T., 2000. Retail environment, self-congruity, and retail patronage: an integrative model and a research agenda. J. Bus. Res. 49 (2), 127–138.
Slotter, E., Gardner, W.L., 2014. Remind me who I am: social interaction strategies for maintaining the threatened self-concept. Personal. Soc. Psychol. Bull. 40 (9), 1148–1161.
Stock, R., Merkle, M., 2018. Can humanoid service robots perform better than service employees? A comparison of innovative behavior cues. In: Proceedings of the 51st Hawaii International Conference on System Sciences, pp. 1056–1065.
Straßmann, C., Krämer, N.C., 2018. A two-study approach to explore the effect of user characteristics on users' perception and evaluation of a virtual assistant's appearance. Multimodal Technol. Interact. 2 (4), 66–91.
Stroessner, S., Benitez, J., 2019. The social perception of humanoid and non-humanoid robots: effects of gendered and machinelike features. Int. J. Soc. Robot. 11 (2), 305–315.
Swaminathan, V., Stilley, K.M., Ahluwalia, R., 2009. When brand personality matters: the moderating role of attachment styles. J. Consum. Res. 35 (6), 985–1002.
Syam, N., Sharma, A., 2018. Waiting for a sales renaissance in the fourth industrial revolution: machine learning and artificial intelligence in sales research and practice. Ind. Mark. Manag. 69 (1), 135–146.
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., Loggarakis, A., 2020. User experiences of social support from companion chatbots in everyday contexts: thematic analysis. J. Med. Internet Res. 22 (3), e16235.
Thomaz, F., Salge, C., Karahanna, E., Hulland, J., 2020. Learning from the dark web: leveraging conversational agents in the era of hyper-privacy to enhance marketing. J. Acad. Mark. Sci. 48 (1), 43–63.
Thorne, A., 1987. The press of personality: a study of conversations between introverts and extraverts. J. Pers. Soc. Psychol. 53 (4), 718–726.
Toosi, N., Babbitt, L.G., Ambady, N., Sommers, S.R., 2012. Dyadic interracial interactions: a meta-analysis. Psychol. Bull. 138 (2), 1–27.
Triandis, H.C., 1989. The self and social behavior in differing cultural contexts. Psychol. Rev. 96 (3), 506–520.
Troye, S., Supphellen, M., 2012. Consumer participation in coproduction: "I made it myself" effects on consumers' sensory perceptions and evaluations of outcome and input product. J. Mark. 76 (2), 33–46.
Tuan, Y., 2002. Community, society, and the individual. Geogr. Rev. 92, 307–318.
Turkle, S., 2007. Authenticity in the age of digital companions. Interact. Stud. 8 (3), 501–517.
Unal, S., Dalgic, T., Akar, E., 2018. How avatars help enhancing self-image congruence. Int. J. Internet Mark. Advert. 12 (4), 374–395.
Usakli, A., Baloglu, S., 2011. Brand personality of tourist destinations: an application of self-congruity theory. Tour. Manag. 32 (1), 114–127.
Valcke, B., Van Hiel, A., Van Assche, J., Van Roey, T., Onraet, E., Roets, A., 2020. The need for inclusion: the relationships between relational and collective inclusion needs and psychological well- and ill-being. Eur. J. Soc. Psychol. 50 (3), 579–596.
Van den Hende, E., Mugge, R., 2014. Investigating gender-schema congruity effects on consumers' evaluation of anthropomorphized products. Psychol. Mark. 31 (4), 264–277.
Van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D., Petersen, J.A., 2017. Domo arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences. J. Serv. Res. 20 (1), 43–58.
Venkatesh, V., Davis, F.D., 2000. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46 (2), 186–204.
Vogl, T., Seidelin, C., Ganesh, B., Bright, J., 2020. Smart technology and the emergence of algorithmic bureaucracy: artificial intelligence in UK local authorities. Public Adm. Rev. 80 (6), 946–961.
Walz, A., Firth-Butterfield, K., 2018. Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime. Duke L. Tech. Rev. 18 (1), 176–231.
Wamba, S., Bawack, R.E., Guthrie, C., Queiroz, M.M., Carillo, K.D.A., 2021. Are we preparing for a good AI society? A bibliometric review and research agenda. Technol. Forecast. Soc. Chang. 164 (C), 120482.
Wang, X., Krumhuber, E.G., 2018. Mind perception of robots varies with their economic versus social function. Front. Psychol. 9 (1), 1230.
Weiner, B., 1985. An attributional theory of achievement motivation and emotion. Psychol. Rev. 92 (4), 548–573.
West, S., Whittaker, M., Crawford, K., 2019. Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute. Available at: https://ainowinstitute.org/discriminatingsystems.html.
Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S., Martins, A., 2018. Brave new world: service robots in the frontline. J. Serv. Manag. 29 (5), 907–931.
Woods, H.S., 2018. Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism. Crit. Stud. Media Commun. 35 (4), 334–349.
Wyatt, J., 2020. Artificial intelligence and simulated relationships. Available at: https://johnwyatt.com/2020/01/10/article-artificial-intelligence-and-simulated-relationships/. (Accessed 29 September 2021).
Xiao, L., Kumar, V., 2021. Robotics for customer service: a useful complement or an ultimate substitute? J. Serv. Res. 24 (1), 9–29.
Xu, Y., Shieh, C.-H., van Esch, P., Ling, I.-L., 2020. AI customer service: task complexity, problem-solving ability, and usage intention. Australas. Mark. J. 28 (4), 189–199.
Yang, L., Aggarwal, P., McGill, A.L., 2020. The 3 C's of anthropomorphism: connection, comprehension, and competition. Consum. Psychol. Rev. 3 (1), 3–19.
Yim, M., Baek, T.H., Sauer, P.L., 2018. I see myself in service and product consumptions: measuring self-transformative consumption vision (SCV) evoked by static and rich media. J. Interact. Mark. 44 (1), 122–139.
Yoganathan, V., Osburg, V.-S., Kunz, W.H., Toporowski, W., 2021. Check-in at the Robo-desk: effects of automated social presence on social cognition and service implications. Tour. Manag. 85, 104309.
Yogeeswaran, K., Złotowski, J., Livingstone, M., Bartneck, C., Sumioka, H., Ishiguro, H., 2016. The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J. Hum. Robot Interact. 5 (2), 29–47.
Zhang, Y., Song, W., Tan, Z., Zhu, H., Wang, Y., Lam, C.M., Weng, Y., Hoi, S.P., Lu, H., Chan, B.S.M., 2019. Could social robots facilitate children with autism spectrum disorders in learning distrust and deception? Comput. Hum. Behav. 98, 140–149.
Zhou, M., Mark, G., Li, J., Yang, H., 2019. Trusting virtual agents: the effect of personality. ACM Trans. Interact. Intell. Syst. 9 (2), 1–36.
Złotowski, J., Proudfoot, D., Yogeeswaran, K., Bartneck, C., 2015. Anthropomorphism: opportunities and challenges in human–robot interaction. Int. J. Soc. Robot. 7 (3), 347–360.
Złotowski, J., Sumioka, H., Nishio, S., Glas, D.F., Bartneck, C., Ishiguro, H., 2016. Appearance of a robot affects the impact of its behaviour on perceived trustworthiness and empathy. Paladyn J. Behav. Robot. 7 (1), 55–66.

Amani Alabed is a PhD student at Newcastle University Business School, Newcastle University, UK. Her research focuses on consumer behaviour and artificial intelligence. She has previously obtained an MSc degree in International Marketing at Newcastle University Business School. She also has a certification in the fundamentals of AI from Microsoft and a certification in social media from the Digital Marketing Institute.

Ana Javornik is an assistant professor in marketing at the School of Management, University of Bristol. Her research focuses on the use and deployment of digital and immersive technologies, predominantly in commercial contexts. Her work is regularly presented at international conferences and has been published in internationally recognised journals such as the Journal of Retailing, Journal of Interactive Marketing, Psychology & Marketing and others.

Diana Gregory-Smith is a Professor of Marketing and Sustainability at Newcastle University Business School, Newcastle University, UK. Her research focuses on ethical and sustainable marketing and consumption; the psychology of decision making and behaviour change; and technology and consumer behaviour. Diana is an interdisciplinary researcher whose work has been published in a range of journals such as Psychology & Marketing, Journal of Business Ethics, Computers in Human Behavior, Annals of Tourism Research, Tourism Management, and European Management Review, amongst others.