
Technological Forecasting & Social Change 182 (2022) 121786

Contents lists available at ScienceDirect

Technological Forecasting & Social Change


journal homepage: www.elsevier.com/locate/techfore

AI anthropomorphism and its eect on users' sel-congruence and sel–AI


integration: A theoretical ramework and research agenda
Amani Alabed a,*, Ana Javornik b, Diana Gregory-Smith a

a Newcastle University Business School, 5 Barrack Rd, Newcastle upon Tyne NE1 4SE, UK
b School of Management, University of Bristol, Howard House, Queen's Ave, Bristol BS8 1SD, UK

* Corresponding author. E-mail addresses: [email protected] (A. Alabed), [email protected] (A. Javornik), [email protected] (D. Gregory-Smith).

https://doi.org/10.1016/j.techfore.2022.121786
Received 8 March 2021; Received in revised form 30 May 2022; Accepted 1 June 2022; Available online 15 June 2022
0040-1625/© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

A R T I C L E  I N F O

Keywords: Artificial intelligence; AI; Anthropomorphism; Self-congruence; Self-integration; Personality traits

A B S T R A C T

This paper examines how users of anthropomorphised artificially intelligent (AI) agents, which possess capabilities to mimic humanlike behaviour, relate psychologically to such agents in terms of their self-concept. The proposed conceptual framework specifies different levels of anthropomorphism of AI agents and, drawing on insights from psychology, marketing and human–computer interaction literature, establishes a conceptual link between AI anthropomorphism and self-congruence. The paper then explains how this can lead to self–AI integration, a novel concept that articulates the process of users integrating AI agents into their self-concept. However, these effects can depend on a range of moderating factors, such as consumer traits, situational factors, self-construal and social exclusion. Crucially, the conceptual framework specifies how these processes can lead to specific personal-, group- and societal-level consequences, such as emotional connection and digital dementia. The research agenda proposed on the basis of the conceptual framework identifies key areas of interest that should be tackled by future research concerning this important phenomenon.

1. Introduction

The artificial intelligence (AI) industry is expected to reach $1811.8 billion in revenue (Grand View Research, 2022) and to contribute $15.7 trillion to the global economy by 2030 (PwC, 2017). This trend is tightly linked with a widespread integration of AI across a range of sectors, e.g., education, retail, companionship and entertainment (Furman and Seamans, 2019; Liu et al., 2020; McLean and Osei-Frimpong, 2019a), where people employ AI for a variety of tasks including speech recognition, personalised recommendation, problem solving, and data processing (Davenport and Ronanki, 2018). A crucial phenomenon to note within this expansion is the ever-improving anthropomorphism of this technology. AI agents seem progressively more humanlike, not only in terms of their physical appearance, but also in the way they mimic emotions and the personality traits they appear to possess (Aggarwal and McGill, 2007; Epley, 2018; Zhou et al., 2019).

Despite this rapid adoption of anthropomorphic AI in many areas of human activity, little is understood about how users relate to such AI agents from the perspective of their own identity. This lack of attention to the effect on users' self-concept is a salient research gap in the ongoing examination of anthropomorphism and AI, despite growing research on humanlike AI. For instance, prior work examined to what extent anthropomorphic agents evoke empathy and trustworthiness (Złotowski et al., 2016) or consumers' acceptance of them (Xiao and Kumar, 2021); to what extent users engage with such agents (Hollebeek et al., 2021); and how AI affects brand-related responses such as loyalty (Gaustad et al., 2018; Lu et al., 2019). Moreover, there have been important advances with respect to the role of anthropomorphised AI in service quality (Yoganathan et al., 2021), service experience (McLeay et al., 2021) and usage intention (Xu et al., 2020). Yet, to date, extant research has overlooked the relationship between users and anthropomorphised agents from the perspective of users' identity – in other words their self-concept (Karanika and Hogg, 2020; Sirgy, 1982) – despite explicit calls for such investigation (MacInnis and Folkes, 2017).

This knowledge gap is surprising and important for two reasons. First, self-concept is one of the key determinants of how users may respond to external stimuli (Sirgy, 1982) and how they may engage with technology (Marder et al., 2019). Second, any effects on or changes to the self-concept can have profound effects on an individual's well-being (Cross et al., 2003), consumption habits (Mandel et al., 2017) and social interactions (Slotter and Gardner, 2014). There is a breadth of research demonstrating that individuals relate psychologically to technologies


and non-animate entities (Hollenbeck and Kaikati, 2012), which can in turn expand their self-concept (de Kerviler and Rodriguez, 2019) and momentarily transform it (Javornik et al., 2021; Yim et al., 2018). But how do users relate to anthropomorphised AI agents, who might appear and behave like humans and, in some cases, even display humanlike personalities? This research seeks to uncover these processes and to respond to related knowledge calls in this area of research. Notably, MacInnis and Folkes (2017) highlight the need to understand in what way consumers might perceive themselves as congruent with anthropomorphised AI. Recently, McLeay et al. (2021) also signposted the need for further investigation on how consumers perceive the humanness of robots and how that alters the interactions and associated consequences in service contexts and beyond. Moreover, as called for by Yoganathan et al. (2021), we account for individual characteristics and other potential moderators that can significantly alter interactions with AI and the underlying psychological processes.

We aim to overcome this gap by putting forward a conceptual framework that specifies the relationships between anthropomorphised AI and identity-related processes, namely self-congruence and self–AI integration; the latter is a new key concept that we propose in this area. We formulate fifteen research propositions that postulate specific relationships in the conceptual framework, which also considers the impact of this process at individual, group and societal levels. As such, we offer several novel contributions.

First, we contribute to the growing body of literature on the role of anthropomorphised interfaces of AI agents (McLeay et al., 2021) by highlighting the importance of users' identities in this context (Araujo, 2018; MacInnis and Folkes, 2017; Marketing Science Institute, 2018). We specify self-congruence and self–AI integration as key concepts that mediate the effects of anthropomorphism on a range of user responses. Therefore, we contribute to prior literature that identified self-congruence as an impactful driver of users' responses in both digital and offline environments (Abosag et al., 2020; Aw et al., 2019). This aspect is particularly important in a marketing context, where it can impact relationships with brands and products (Büyükdağ and Kitapci, 2021; MacInnis and Folkes, 2017; Karanika and Hogg, 2020).

Second, this work extends prior research on how individuals relate to or integrate the resources of inanimate objects into their perceptions of themselves (Delgado-Ballester et al., 2017). We build on self-expansion theory (Aron and Aron, 1986) as we examine whether individuals can establish a deeper psychological tie with anthropomorphic AI agents and perceive them as part of their self-concept. Prior research already demonstrated that users could integrate customised products and brands as part of who they are (Troye and Supphellen, 2012). We postulate that those interacting with anthropomorphised AI agents can in some cases extend the self through "self–AI integration". Through this novel concept we theorise the intimate connection between the self-concept and humanlike AI agents.

Third, we consider the potential boundary conditions that these processes are likely to encounter, building on recent conceptual frameworks, such as those by Xiao and Kumar (2021) and Blut et al. (2021), who highlight factors moderating user intention and actual adoption of AI. We identify variables that potentially moderate the relationships in our framework: specifically, user-related characteristics (e.g., personality traits, self-construal) and situational factors (e.g., social exclusion, familiarity and individual knowledge of AI).

Finally, we add to the current understanding of the effects of anthropomorphised AI and robotics by addressing the outcomes of these self-related processes at the individual, group and societal levels (Davenport et al., 2020; Kamolsook et al., 2019; Mao et al., 2020). We highlight not only the positive reactions towards anthropomorphic AI agents (e.g., human likeness, acceptance) (Dietvorst et al., 2018; Mende et al., 2019) but also the potential drawbacks (e.g., data privacy, perceived autonomy) (Leung et al., 2018) and societal implications, thus contributing to related work (Davenport et al., 2020; Huang and Rust, 2018, 2021).

Our research also specifies implications for managers in the field of AI and digital marketing, helping them to understand how anthropomorphic AI agents should be developed to yield meaningful interactions (Davenport et al., 2020), while accounting for potential negative consequences. We conclude with a research agenda outlining future research directions in terms of theory, context, and methodology.

2. Anthropomorphised AI

AI exists in different formats and has been applied across a wide range of contexts thanks to its capability of operating in an intelligent manner. While Shankar (2018, p. 6) defined AI as "the programs, algorithms, systems and machines that demonstrate intelligence", Huang and Rust (2021) offer a more specific categorisation of AI, notably as mechanical, thinking and feeling AI. Specifically, mechanical AI is used to perform transactional tasks and replace human intelligence. Thinking AI is used to augment human intelligence with utilitarian services such as analytics or diagnostics. Finally, feeling AI can be used for experience-based and emotional tasks, where AI agents such as chatbots can interact with customers and convey empathy and elements of social interaction in customer service (Huang and Rust, 2021). Such feeling AI differs significantly from self-service technologies with mechanical and thinking applications (Wirtz et al., 2018).

While there is this notable diversity of AI categories and applications, a key capability across different AI categories is that it can mimic intelligent human behaviour or traits (Syam and Sharma, 2018) by relying on technological advances such as machine learning, natural language processing, speech recognition and image recognition (Davenport et al., 2020). These enable the anthropomorphism of AI, which can be conveyed in a variety of features. Table A provides an overview of anthropomorphised AI examples across various domains (retail, education, gaming, administration, etc.) and specifies anthropomorphised characteristics and their key audiences (B2B or B2C). Tesla, Birchbox, Stitch Fix and Lowe are some of the many brands that have integrated AI into their products and subsequently anthropomorphised visual and/or auditory cues to bring such products to life (Salles et al., 2020). AI products are, for instance, assigned gendered names (e.g., Amazon's Alexa). They also display human appearances (e.g., IPsoft's Amelia, Genesis Toys' My Friend Cayla Doll) and interactive personalities (e.g., Cleo).

The potential of anthropomorphised AI is generating substantial interest (van Doorn et al., 2017), with researchers highlighting the value of anthropomorphism. In service settings, Xiao and Kumar (2021) and Sheehan et al. (2020) identify anthropomorphism as one characteristic of robots that would prompt customer acceptance and adoption. Moreover, Yoganathan et al. (2021) advocate high levels of anthropomorphism, as it improves user evaluations of aspects of robots' social cognition, such as warmth. The relevance of anthropomorphism is also further emphasised in relation to other aspects of service quality, as research shows that it improves customer engagement (McLeay et al., 2021), customer satisfaction (Choi et al., 2021) and willingness to pay (Yoganathan et al., 2021). However, most research models currently distinguish between high and low levels of anthropomorphism but do not specify in more detail whether different types of anthropomorphism (e.g., physical, personality or emotional) may maximise these outcomes. Moreover, prior research studied how users perceive anthropomorphised characters (Aggarwal and McGill, 2007; Unal et al., 2018), but little attention has been paid to how users relate to these anthropomorphised agents from the point of view of their self-concept (MacInnis and Folkes, 2017). We build on this prior work by studying how the anthropomorphic cues of AI agents can activate the "human" schema (Aggarwal and McGill, 2007), which may allow users to feel congruent with that type of AI or even integrate it as part of their own identity.
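To summarise the taxonomy discussed above in one place, the short sketch below restates Huang and Rust's (2021) three categories as a simple data structure. The example agents and their category assignments are illustrative assumptions based on the descriptions in Table A below, not classifications made by the cited authors.

```python
from dataclasses import dataclass
from enum import Enum

class AICategory(Enum):
    """Huang and Rust's (2021) three AI categories, keyed to their task focus."""
    MECHANICAL = "transactional, standardised tasks"
    THINKING = "personalisation tasks such as analytics or diagnostics"
    FEELING = "experience-based, emotional tasks (relationalisation)"

@dataclass
class AIAgent:
    name: str
    category: AICategory

# Illustrative assignments (assumptions, drawn from the Table A descriptions).
agents = [
    AIAgent("BotCore", AICategory.MECHANICAL),
    AIAgent("Ada", AICategory.THINKING),
    AIAgent("Replika", AICategory.FEELING),
]

for agent in agents:
    print(f"{agent.name}: {agent.category.name.lower()} AI – {agent.category.value}")
```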


Table A
AI applications and usages.

| Domain of application | Sector | Example | Description | Anthropomorphic features | Year of introduction | Scope | Technology used |
|---|---|---|---|---|---|---|---|
| Retail | Customer service | Amelia | Chatbot for different purposes, such as managing customer care, solving IT and HR services, and involved in sectors such as banking, etc. | Human appearance, gendered voice, gendered name | 2017 | B2B/B2C | Natural language processing |
| Retail | Online sales | BotCore | Conversational customer relations management chatbot that automates redundant sales tasks and manages outbound sales efforts. | Conversational humanlike manner | 2016 | B2B | Natural language processing |
| Communication | Maps and transport | Apple's Siri | A built-in, voice-controlled virtual assistant that is exclusive to Apple users. The personal assistant answers questions and understands relationships and contexts. | Gendered voice, gendered name | 2010 | B2C | Speech recognition and natural language processing |
| Communication | Food and restaurants | Microsoft's Cortana | A personal virtual assistant that is exclusive to Microsoft users. It sets reminders, keeps notes and lists, and takes care of tasks. | Gendered voice, gendered name | 2014 | B2C | Speech recognition and natural language processing |
| Communication | Cultural and social activities | Amazon's Alexa | Hands-free speaker from Amazon that can be voice controlled. It acts as a virtual assistant that can interact by voice, play back music and stream podcasts, and can be used as a home automation system. | Gendered voice, gendered name | 2014 | B2B/B2C | Speech recognition and natural language processing |
| Educative | Teaching assistants | IBM's Jill Watson | AI teaching assistant that helps students online by answering questions about the curriculum. | Gendered name | 2016 | B2C | Natural language processing |
| Educative | Special education | Muse | AI teaching assistant that focuses on helping parents in developing traits in their children for better life outcomes, such as emotional regulation, self-control and long-term persistence. | N/A | 2016 | B2C | Machine learning |
| Educative | Personalised education | Duolingo | AI platform for virtual language learning that curates personalised content to the individual. | Avatar customisation, gendered voice | 2011 | B2C | Machine learning / natural language processing |
| Music | Music discovery | SoundHound | Voice-enabled AI technology that allows businesses to integrate voice and conversational intelligence into their products. | Gendered voice | 2015 | B2B | Speech recognition / natural language processing |
| Gaming | Gaming | The OpenAI Five | Gaming AI platform that plays 180 years' worth of games against itself every day. This technology learns via self-play. | N/A | 2011 | B2B | Machine learning |
| Gaming | Toys | Genesis Toys – My Friend Cayla Doll | Interactive fashion doll that can answer questions, play games, read stories, etc. | Gendered name, human appearance customisation, gendered voice | 2014 | B2C | Speech recognition / natural language processing |
| Administration | Finance | Cleo | AI assistant that helps users in managing their finances. The assistant analyses spending, sets budgets and provides actionable insights. | Gendered name, interactive personality | 2016 | B2C | Machine learning |
| Administration | Digital integration | Abe | AI-powered banking solution that empowers banks and credit unions. It partners with digital banking providers, data insight providers and aggregators. | N/A | 2017 | B2B/B2C | Natural language processing |
| Administration | | Mosaic | AI assistant that compares a user's resume to a job opening by identifying the needed keywords. | N/A | 2018 | B2C | Machine learning / natural language processing |
| Diagnostics | Medication | Ada | AI platform that is founded by doctors, scientists and industry pioneers to address personal health. It helps people to manage their health and helps medical professionals to deliver attentive care. | N/A | 2016 | B2B/B2C | Machine learning / natural language processing |
| Diagnostics | Health monitoring | AiCure | AI chatbot that provides health information based on Q&A with patients. It helps clinicians to monitor their patients' treatment by measuring changes in facial expressions. | N/A | 2010 | B2B | Computer vision / image recognition / machine learning |
| Health-related behaviour | Healthy eating | HealthHero | AI-powered agent that communicates with patients using phone calls and text messages based on their state of health. | N/A | 2019 | B2B/B2C | Natural language processing / image recognition |
| Health-related behaviour | Exercise | Aaptiv | AI assistant that builds personalised fitness and lifestyle plans based on the user's preferences, current fitness levels and eating habits. It works with data inputs from smartwatches and fitness trackers. | N/A | 2015 | B2C | Natural language processing |
| Health-related behaviour | Mental health and well-being | Woebot | AI counsellor that acts as a talk therapy chatbot. It helps users to monitor their moods and is based on cognitive behavioural therapy. | Gendered character, interactive personality | 2017 | B2C | Natural language processing |
| Health-related behaviour | Mental health and well-being | Replika | Chatbot companion for mental wellness. Users can nurture and raise it through text conversations. Users can also choose its name, gender and physical appearance. | Gendered name, gendered voice, interactive personality | 2014 | B2C | Machine learning / natural language processing |

3. Conceptual framework

This section proposes a conceptual framework formed of three blocks (see Fig. A). The left block of the framework proposes the relationship between the type of AI system anthropomorphism (i.e., physical, emotional and personality) and the user's perceived self-congruence with the AI system. The framework also specifies the situational and personal moderators (i.e., consumer traits as moderators). The middle block of the framework proposes the relationship between self-congruence and self–AI integration and includes the user's perceived self-construal as the moderator of this relationship. Finally, the right block of the framework presents the possible consequences of self–AI integration at the individual, group and societal levels. Based on this conceptualised framework, we derive theoretical propositions.

Table B presents selected literature on the key concepts and theories in relation to AI, anthropomorphism and the self-concept that were crucial for developing the conceptual framework. Specifically, we refer to the concept of anthropomorphism (Epley, 2018) to explain the humanisation of AI agents, such as virtual assistants and chatbots, through physical (e.g., voice, appearance), personality and emotional traits. We then rely on self-congruence theory (MacInnis and Folkes, 2017; Sirgy, 1982) to explain how different anthropomorphic cues in AI agents can make users feel that such agents can be similar to them. We further build on the concept of self-integration (Delgado-Ballester et al., 2019; Troye and Supphellen, 2012) and the theory of self-expansion (Aron and Aron, 1986) to explain how users can, in some cases, deepen their relationship with anthropomorphised AI agents and integrate them into their self-concept. Taken together, these theories provide insights into how anthropomorphism can establish a connection with users' self-concept, either through perceived similarity (self-congruence) or even through the incorporation of the AI agent as part of users' self-concept, as AI provides meanings that are central to the user's identity.

Fig. A. Conceptual framework.


Table B
Overview of key literature.

| Topic | Author | Technology | Methodology | Key findings |
|---|---|---|---|---|
| Artificial Intelligence | Huang and Rust (2018) | Service robots | Conceptual | The authors suggest four types of AI systems: mechanical, analytical, intuitive and empathetic. They explain that firms should decide whether to hire AI services depending on the nature of the task and services. These task levels of intelligence also predict the timing of when human service labour would be replaced by AI. |
| Artificial Intelligence | Huang and Rust (2021) | Service robots | Conceptual | This paper refines Huang and Rust's (2018) four types of AI systems into three: mechanical, thinking and feeling. Mechanical AI refers to the automated services that can be used for standardisation. Thinking AI refers to the automated services that can be used for personalisation, and feeling AI can be used for relationalisation. |
| Anthropomorphism | Rauschnabel and Ahuvia (2014) | Brands | Quantitative | Anthropomorphism positively links to stronger consumer–brand relationships, which leads to self–brand integration. Also, consumers love brands that are congruent with the way they see themselves. |
| Anthropomorphism | Lu et al. (2019) | Service robots | Mixed methods | Anthropomorphism is defined as a critical dimension for technology acceptance. Results also suggest that consumers look at AI robots as hedonic systems. However, designing intelligent products with humanlike appearances might threaten the consumer's identity. |
| Anthropomorphism | Mende et al. (2019) | Service robots | Experimental | Consumers engage in compensatory responses when they interact with humanoid service robots. Compensatory responses result from the feeling of discomfort or having one's identity threatened. These responses are moderated by the user's social belongingness, the perceived healthfulness of food and the extent to which robots are mechanised. |
| Anthropomorphism | Longoni et al. (2019) | Service provider | Experimental | Consumers resist using AI systems because of their neglect of uniqueness (i.e., AI is not capable of relating to the customer's unique identity). |
| Anthropomorphism | Melián-González et al. (2021) | Chatbots | Quantitative | Consumers' intentions to use chatbots depend on the following factors: the chatbot's expected performance, being in the habit of using chatbots, social influences, the hedonic component of using them, and how chatbots act like humans. The study also shows that innovativeness will result in more favourable attitudes towards chatbots. |
| Anthropomorphism | Xiao and Kumar (2021) | Anthropomorphised robots | Conceptual | The conceptual framework discusses the antecedents and consequences of firms adopting robotics in a customer service context. The authors explain that robot anthropomorphism can positively contribute to the customer's acceptance of robots, which in return impacts customer satisfaction and customer emotions. They discuss customer and employee characteristics (i.e., readiness and demographics) that shape the user–robot relationship. |
| Anthropomorphism | Yoganathan et al. (2021) | Anthropomorphised robots | Experimental | In contrast to self-service machines, consumers reported higher social cognitive evaluation (e.g., perceived warmth, perceived competence) when humanoid robots were involved. The social presence of these robots contributed to higher service quality as it was induced by the robots' anthropomorphic features. |
| Self-Congruence | MacInnis and Folkes (2017) | Anthropomorphic brands | Conceptual | Users are able to perceive brands in humanlike forms, viewing them to have distinct mind and personality traits. The perceived personality that is consistent with a user's self-concept will contribute to perceived similarities and humanlike relationships with these brands. |
| Self-Integration | Troye and Supphellen (2012) | Branded product | Experimental | Users who engage in self-production (e.g., using a dinner kit to make a meal) value the self-produced outcome and develop links between the outcome and the self. In return, users can integrate products that they engage with, viewing them as a part of who they are as they transfer positive affect from the self to the outcome. |
| Self-Integration | Delgado-Ballester et al. (2017) | Anthropomorphic brands | Quantitative | The integration of an anthropomorphised brand in one's self happens because: (1) individuals can relate to a brand's characteristics (cognitive incorporation); and (2) the anthropomorphised brand has a social identity that helps users to define themselves (social meanings). |
| Self-Integration | Delgado-Ballester et al. (2019) | Anthropomorphic brands | Experimental | Anthropomorphism and the user's liking of brands positively impact self–brand integration. |
| Self-Construal | Kwak et al. (2017) | Anthropomorphic brands | Experimental | Compared with individuals with an interdependent self-construal, independents experience high perceptions of distributive injustice due to the brand's anthropomorphism. On the other hand, interdependents have less negative perceptions of distributive injustice but more negative perceptions of procedural injustice due to the brand's anthropomorphism. |
| Self-Construal | Mourey et al. (2017) | Smartphone/vacuum | Experimental | A high level of anthropomorphism contributes to a reduction in the need to exaggerate one's social connections, the willingness to take part in prosocial behaviour and the need to engage with others in the future. These effects are driven by the need for social assurance. |

3.1. Building block one: self-congruence with anthropomorphised AI agents

3.1.1. Anthropomorphism of AI agents and self-congruence
Anthropomorphism is the process of attributing humanlike motivations, emotions or characteristics to real or imagined non-human entities (Airenti, 2018; Epley, 2018). AI agents are a prime example of anthropomorphism due to their ability to mimic human behaviour and appearance, which in turn allows them to engage socially with humans (van Doorn et al., 2017). These agents may embody a myriad of humanlike cues, such as various physical, personality-related and emotional traits.

Individuals frequently evaluate whether products' or brands' cues and symbolic meanings are in some ways similar or congruent to their own self-concept, which they try to reinforce or confirm (Sirgy, 1982; Sirgy et al., 2000). For instance, users may experience congruence with brands in terms of gender (Grohmann, 2009), personality (Fennis and Pruyn, 2007) or reference groups (Escalas and Bettman, 2003). The humanlike cues expressed by anthropomorphised products can activate the human schema (Aggarwal and McGill, 2007). In return, users can identify similarities between the anthropomorphised products and the human schema (Van den Hende and Mugge, 2014) by relating products' characteristics to their self-concept (MacInnis and Folkes, 2017). Specifically, the symbolic meanings of AI agents, such as their abstract or image-based associations (i.e., images portraying the agent's personality), may be aligned with the user's self-concept and match the user's personality. Users ascribe internal characteristics, such as emotions or mental states, to inanimate objects, and this can make them experience congruence with these entities (MacInnis and Folkes, 2017). This is particularly relevant with feeling AI applications (Huang and Rust, 2021) that have emotional capacities to help users express their feelings better (e.g., Replika, an emotional assistant; Cleo, a financial assistant). Such agents can even be used for relationalisation – to build personalised relationships – as they are able to handle data specific to an individual's emotions (Huang and Rust, 2021). Empirical evidence can be found in forum discussions, where Replika users comment that the app "treats me like a mirror to her thoughts", "he's actually becoming like me" and "he's just like me" (Hoeornamjoon, 2021; Rashy, 2020; Shiaislam, 2020). We propose that consumers are likely to draw parallels between the AI agents and themselves, to compare the humanlike traits of the AI agents to their self-concept, similarly to how they evaluate brands or products (MacInnis and Folkes, 2017) (see Fig. B). However, the effects of anthropomorphised agents on self-congruence can potentially differ, depending on the traits that are being anthropomorphised – physical, emotional or personality traits.
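Empirical work in the self-congruence tradition (e.g., Sirgy, 1982) typically operationalises this comparison as a similarity or discrepancy score between the user's self-image ratings and the perceived image of the entity. The sketch below illustrates that idea for an AI agent; the trait list, the Likert-style ratings and the choice of cosine similarity are all illustrative assumptions rather than a measure proposed in this paper.

```python
import math

# Hypothetical trait ratings on a 1-7 Likert scale (illustrative values only).
# Self-congruence is sketched here as cosine similarity between the user's
# self-image ratings and the perceived ratings of an anthropomorphised agent.
TRAITS = ["warm", "intelligent", "playful", "dependable"]

user_self_image = {"warm": 6, "intelligent": 5, "playful": 3, "dependable": 6}
perceived_agent = {"warm": 5, "intelligent": 6, "playful": 4, "dependable": 5}

def self_congruence(self_image: dict, agent_image: dict) -> float:
    """Cosine similarity between two trait-rating vectors (1.0 = identical profile)."""
    dot = sum(self_image[t] * agent_image[t] for t in TRAITS)
    norm_self = math.sqrt(sum(self_image[t] ** 2 for t in TRAITS))
    norm_agent = math.sqrt(sum(agent_image[t] ** 2 for t in TRAITS))
    return dot / (norm_self * norm_agent)

print(f"Illustrative self-congruence score: "
      f"{self_congruence(user_self_image, perceived_agent):.2f}")
```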


3.1.2. Self-congruence with AI agents' physical traits
In the anthropomorphism literature, Epley et al. (2007) and Aggarwal and McGill (2007) were some of the first to discuss the importance of the object's physical traits that match the human schema. Indeed, developers and marketers may make the human schema easily accessible by designing their products to have humanlike appearances, referring to them in the first person or assigning them human names and genders (e.g., Kellogg's Tony the Tiger, Procter & Gamble's Mr. Clean, Nintendo's Mario, etc.) to make them look familiar (Aggarwal and McGill, 2012). Viewing entities in humanlike terms makes it easier for users to evaluate the anthropomorphic cues against their own self and infer degrees of similarities with their self-concept (MacInnis and Folkes, 2017).

One of the prominent physical traits of AI agents is their appearance, which may differ based on their task types. Some of the most common types are: mechanoids with mechanistic appearances that lack humanlike features; humanoids that imply humanlike traits (e.g., a head, eyes, hands, facial features) but without close resemblance to humans; and androids, which are robots whose appearances and behaviours are humanlike (van Doorn et al., 2017; see Fig. C). A critical issue debated in the anthropomorphism literature is the "uncanny valley effect", which explains feelings of uneasiness when interacting with extremely humanlike robots (Mori, 1970), as the over-humanisation of robots causes discomfort (Mori, 2012; Schmitt, 2020). Schmitt (2020) argued that users may experience a biased perception of humanlike robots in comparison to other humans because the two belong to different species, despite exhibiting similar physical or personality traits. Indeed, highly humanlike robots can also be experienced as a threat to the user's human identity, as they can appear to undermine the user's human uniqueness or the distinctiveness of the human species (Ferrari et al., 2016; Mende et al., 2019).

Studies of the uncanny valley effect concerning artificial agents are contradictory. Some suggest that the more resemblance to humans that users perceive a robot to have, the more likeability occurs, but extremely humanlike robots that do not have the features of a typical machine were despised by users (Mathur and Reichling, 2016). In contrast, recent research studying the uncanny valley effect in the context of AI and services did not find a correlation between this effect and anthropomorphism (Blut et al., 2021; Li and Sung, 2021). Other recent literature also demonstrates that humanoid robots, in contrast to mechanical robots, would stimulate positive social cognitive evaluations, such as perceived warmth (Choi et al., 2021) or competence (Yoganathan et al., 2021). This suggests that the uncanny valley effect is less of a concern in the case of anthropomorphised AI agents, possibly because of the increased prevalence of AI agents in the user's daily life (Li and Sung, 2021).

Another set of physical cues that AI agents carry includes voice (e.g., Siri, IKEA's Anna), facial expressions (e.g., My Friend Cayla Doll, Amelia) or gendered names (e.g., Jill Watson, Replika) (see Table A). We argue that these traits convey to the user their resemblance with the AI agent. For example, AI agents with a physical appearance that is similar to the user's are perceived as members of a reference group (Kuchenbrandt et al., 2013). The agent's physical traits may also carry social meanings that are close to the user's social identity, which would facilitate the user feeling congruent with such an agent (Sirgy, 1982). Conversely, agents without any attributes similar to those of the user are treated as out-group members (Złotowski et al., 2015). Therefore, we suggest:

P1a: Anthropomorphism that is based on human physical traits in AI agents leads to users' self-congruence with such agents.

3.1.3. Self-congruence with AI agents' personality traits
Studies in consumer behaviour suggest that the personality cues associated with products are considered as symbolic and that consumers may perceive them as (dis)similar to their own personality (McCrae and Costa, 2003). Studies in social psychology have equated the process of brand self-congruence with the process of choosing friends: "just as people take care in choosing friends who have a similar personality to themselves, so brands, which are symbolic of particular images, are chosen with the same concern" (de Chernatony et al., 2003, p. 131). Since users treat technological entities as social actors (Nass and Moon, 2000), they can identify with AI agents that possess a personality similar to themselves.

Anthropomorphism facilitates the evaluation of inanimate objects' personalities, since humanlike personality cues activate the human schema (Landwehr et al., 2011). Such cues can establish a closer link with users and their self-concept, allowing users to identify similarities between the object and themselves (Usakli and Baloglu, 2011). Anthropomorphised AI agents such as Cleo, a financial assistant, and Woebot, a mental health counsellor, have distinct personalities, which also allows them to occupy different social roles. Users may consider these personalities as a cognitive category to relate to in humanlike terms (Chen et al., 2015).

Fig. B. Block one: anthropomorphism and self-congruence.


Fig. C. Types of robots: Roomba vacuum cleaning robot (left); Rolling Partner Robot (middle); Android Sophia (right).

act as their riend through their social roles (e.g., Replika, Woebot) and example, Replika is an emotional support agent in the orm o a virtual
that demonstrate a similar personality. Consumers might also consider avatar that communicates emotions through its texting style and learns
themselves to be tech-savvy or intelligent. I anthropomorphised AI how to personalise conversations and relationships (Huang and Rust,
agents were to convey their personality as being savvy and knowl- 2021; Huet, 2016). Another example o an empathetic AI application is
edgeable then users might perceive such agents as congruent with Aectiva, emotion recognition sotware used in the gaming industry.
themselves. Thus, i the human schema is primed through an AI agent's These and other examples convey emotional cues and expressions that
personality trait that is congruent with the user's personality, this can allow users to position these objects in the “human” category schema
enable the user to relate the AI agent to their own sel-concept. (Aggarwal and McGill, 2007) and, thereore, in the same cognitive space
Furthermore, ollowing MacInnis and Folkes (2017), the more an AI as the user's sel-image (Quester et al., 2000). In this respect, the
agent's perceived personality is consistent with the user's sel-concept, perceived congruence with anthropomorphised objects does not neces-
the stronger the perceived similarity becomes. Hence: sarily occur based on external appearances but based on the mental
states and emotions that users ascribe to themselves and to the agent. We
P1b: Anthropomorphism that is based on human personality traits in propose:
AI agents leads to users' sel-congruence with such agents.
P1c: Anthropomorphism that is based on human emotions in AI
3.1.4. Self-congruence with AI Agents' emotional traits agents leads to users' sel-congruence with such agents.
In addition to physical appearance and personality cues, perceived
“emotionality” is an essential eature o some AI agents and aects the
acceptance o such agents (Stock and Merkle, 2018). While embedding 3.2. Moderators of self-congruence with anthropomorphised agents
emotions in AI agents remains one o the major design challenges in
robotics, advancements in AI have already allowed or some o its ap- As is the case in most research on human behaviour, relationships
plications to identiy human emotions computationally (Huang and between variables may be subject to urther external and internal in-
Rust, 2018). Research emphasises the role o eeling AI in orming fuences and conditions (Stroessner and Benitez, 2019). We specically
personalised relationships with users (Huang and Rust, 2018). More turn to situational actors and consumer traits as potential moderators,
specically, virtual assistants and chatbots can use cues, voice tones or as prior studies highlight them as important in relation to anthropo-
emoticons to express emotions and thus acilitate social interactions morphism and psychological processes and call or urther investigation
(Fadhil et al., 2018; McLean and Osei-Frimpong, 2019b). into them (Van den Hende and Mugge, 2014). These are visualised in
Emotions are considered an integral component o social interaction: Fig. D.
they aect the robot's likeability (Calvo-Barajas et al., 2020), increase
the perceived value o the robot (Wang and Krumhuber, 2018), and can 3.2.1. Consumer traits
help to rectiy errors in service settings (Choi et al., 2021). While me- Users' interactions with technology do not rely only on the traits o
chanical and thinking AI are well suited to automation and person- the technological gadget but also on the traits o the consumers. Previ-
alisation o certain processes, such as repetitive tasks or customising ous research examined the role o user-related actors, such as de-
communication with customers (Huang and Rust, 2021), AI agents can mographics (Straßmann and Krämer, 2018), customer readiness (Xiao
also ull humans' needs or emotional aection and social belonging and Kumar, 2021), mood (Bishop et al., 2019) and negative attitudes
through their emotional capacity (Wang and Krumhuber, 2018). Feeling towards robots (Miller et al., 2021) to moderate the eect o AI on
AI (Huang and Rust, 2021) is able to analyse and understand users' behavioural outcomes such as trust, acceptance and usage intention.
emotions and tailor the interactions accordingly to users' momentary Moreover, personality traits may also infuence the quality o human-
needs. This can also provide superior customer experience, principally –robot relationships (Robert et al., 2020). While there is prior evidence
because anthropomorphised (vs. non-anthropomorphised) AI agents are that the user's traits, such as extraversion (Robert et al., 2020), inno-
perceived to be warmer (Yoganathan et al., 2021). As relationships are vativeness (Koivisto et al., 2016) and need to belong (Houghton et al.,
built on the sense o liking and similarity between two entities (Abosag 2020), aect behavioural outcomes in a human–robot relationship
et al., 2020), these emotional cues are likely to strengthen the user's (Robert et al., 2020), their impact on users experiencing congruence
eeling o congruence with the AI agents, as the individual can identiy with anthropomorphised agents has not yet been explored.
more easily with such agents.
Empowered by advanced technologies such as sentiment analysis, AI 3.2.1.1. Extraversion. Extraversion and introversion could moderate
agents can iner emotions rom natural language (e.g., text, audio or the relationship between anthropomorphised AI agents and user
video) and can respond in modern cues (e.g., emojis) in online in- congruence with such agents or several reasons. First, the dichotomy
teractions (Cambridge Consultants, 2019; Capatina et al., 2020). For between extraversion and introversion is a critical component that
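As a concrete illustration of the sentiment-analysis mechanism described in Section 3.1.4, the following minimal sketch infers a crude sentiment score from a user's message and mirrors it with an emoji cue. The word lists, scoring rule and responses are illustrative assumptions; production systems such as those cited rely on trained models rather than fixed lexicons.

```python
# Minimal sketch of lexicon-based sentiment inference with an emoji response.
# The word lists and responses are illustrative assumptions, not a production model.
POSITIVE = {"love", "great", "happy", "thanks", "wonderful"}
NEGATIVE = {"sad", "angry", "hate", "terrible", "lonely"}

def infer_sentiment(message: str) -> int:
    """Return a crude sentiment score: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(message: str) -> str:
    """Mirror the user's inferred emotional state with a matching emotional cue."""
    score = infer_sentiment(message)
    if score > 0:
        return "That sounds lovely! \U0001F60A"     # smiling face
    if score < 0:
        return "I'm sorry to hear that. \U0001F614"  # pensive face
    return "Tell me more."

print(respond("I feel sad and lonely today"))  # negative score -> pensive response
```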


Fig. D. Moderators of anthropomorphism and self-congruence.

aects interpersonal relationships and social behaviours (Ivaldi et al., they provide cost advantage, economic easibility and perceived relative
2017). For instance, the more introverted people are, the ewer social advantage (McKinsey Global Institute, 2017). More importantly,
connections and relationships they have (Gockley and Matarić, 2006). anthropomorphic AI agents complement these innovative traits with a
Second, these traits directly relate to a person's social capacity or level o technical sophistication that resembles humanlike intelligence
orming relationships and to their expressiveness (Robert et al., 2020; (Deloitte, 2018). They display innovative personalities via intelligent
Thorne, 1987). Extraverted users – as opposed to introverted ones – are personality cues, or example conversing, solving complex patterns and
more likely to hold more meaningul interactions with robots (Ivaldi building human relationships in elds such as education and healthcare
et al., 2017; Salem et al., 2015). Third, tendencies to anthropomorphise (McKinsey Global Institute, 2017). Users with a high level o innova-
entities such as robots are strongly related to the user's extraversion tiveness and who have a keen interest in novelty are more likely to
(Kaplan et al., 2019). Crucially, extraverted (vs. introverted) users were perceive such anthropomorphic personality cues as innovative and
shown to experience closer psychological connection and similarity with technologically advanced. In turn, this can lead innovative users to
anthropomorphised robots (Salem et al., 2015) and also to positively experience similarity (i.e., sel-congruence) between themselves and
evaluate robots' symbolic cues (i.e., eye contact, smiling or personality anthropomorphic AI. We postulate that:
traits) (Lee et al., 2006). Consequently, we propose that extraverts are
more likely to perceive anthropomorphic traits (particularly anthro- P3: Anthropomorphism that is based on human personality traits in
pomorphised personalities) as congruent to their own identities. AI agents is more likely to lead to sel-congruence or innovative (as
Formally: opposed to non-innovative) users.

P2: Anthropomorphism o AI agents is more likely to lead to sel- 3.2.1.3. Need to belong. Social connection is a key societal issue (Gier-
congruence or extravert (as opposed to introvert) users. veld et al., 2016), as it correlates positively with emotional resilience
and sel-esteem (Fraser and Pakenham, 2009). As connection with
3.2.1.2. Innovativeness. Individual innovativeness has been established others is not always possible or socially excluded people, they may seek
as a critical actor or user acceptance o technology (Koivisto et al., social connections with non-human entities through parasocial re-
2016). Kim et al. (2010) reer to a person's innovativeness as their ten- lationships that they build in virtual settings (i.e., social network sites;
dency to experiment with new technology or products, as well as to Houghton et al., 2020). The COVID-19 pandemic intensied the societal
accept and welcome new technologies (Graa et al., 2017), such as challenges o loneliness due to social distancing, which oten resulted in
chatbots (Melián-González et al., 2021). social isolation (Odekerken-Schröder et al., 2020). AI companion robots
Anthropomorphised AI technologies (Haener et al., 2021) can be were proposed to mitigate this eect, as they display progressively more
perceived as innovative and thus induce sel-congruence with innova- emotionally intelligent behaviours, such as conversing, responding to
tive consumers. Anthropomorphic cues o AI agents project a high de- social cues and establishing rapport (Jecker, 2021). Indeed, when peo-
gree o innovativeness, which can be relevant to the users that seek sel- ple's need or belongingness is not ullled, they rely on technologies to
verication or sel-expression (Sirgy et al., 2000). AI technologies are compensate or their lack o social connection (Derrick et al., 2009).
oten perceived as innovative products (Venkatesh and Davis, 2000), as The theory o sel-congruence posits that consumers will utilise


The theory of self-congruence posits that consumers will utilise products as tools for self-expression or to satisfy the need to belong to a social group or to form a partnership (Escalas and Bettman, 2003). People with a higher need to belong would also be more interested in products that follow social norms and prevent social exclusion (Sirgy, 1982). Anthropomorphism evokes the similarities between the consumer and the anthropomorphised robot through social cues (Delgado-Ballester et al., 2019; Nass and Moon, 2000). For instance, AI agents such as Woebot, Replika, or even Alexa (see Table A) that act as relational peers allow users to avoid loneliness or satisfy their need to belong by providing a supportive relationship (Brandtzaeg and Følstad, 2017; Pradhan et al., 2019; Ramadan et al., 2021). The drive for social connection allows users to anthropomorphise entities more when they are referred to in human relationship terms (e.g., "this product is [like] a friend") (Epley et al., 2007). The social cues expressed by these agents may help to develop interpersonal relationships and are also more likely to be identified by users with a higher need to affiliate (Pickett et al., 2004). As such, we propose that the need to belong is a critical boundary condition that affects users' self-congruence with AI.

P4: Anthropomorphism of AI agents is more likely to lead to self-congruence for users with a higher (as opposed to lower) need to belong.

3.2.2. Situational factors
In relation to the user's general perceptions of robots and behavioural outcomes, prior research has highlighted the importance of situational factors such as task types (Xu et al., 2020), risk perception (Kim and McGill, 2011) and time pressure (Lv et al., 2021). For instance, interactions may differ significantly depending on the type of AI and associated tasks that it performs (Huang and Rust, 2021). However, other moderating factors might prove crucial for the self-congruence with anthropomorphised AI agents. We highlight two such moderators: available information about AI agents and familiarity with them.

3.2.2.1. Available information about the AI agent. Consumers evaluate a product's congruence with themselves by comparing available product information with personality attributes from their own schema (Aguirre-Rodriguez et al., 2012). Providing users with background information about the robot may affect their responses (Darling et al., 2015). Users who were given information about a robot's superior ability compared with humans perceived them as a threat to their human identity and uniqueness (Longoni et al., 2019; Yogeeswaran et al., 2016). Similarly, Leung et al. (2018) demonstrated that workers preferred those AI products whose features did not hinder their identity-relevant skills. Therefore, even if the anthropomorphic cues of AI agents communicate potentially beneficial social meanings, users may not experience self-congruence if the information about the robot conveys a threat to their identity in some way. On the other hand, when individuals are given an empathetic story prior to meeting the robot, they form more favourable attitudes towards it (Darling et al., 2015). Positive information about the robot can even counteract the negative image that the media and movies often depict (Moradi et al., 2018). The lack of transparency of AI capabilities has been identified as a key challenge, with adverse effects on potential users (Dwivedi et al., 2021). Hence, depending on the type of information provided to users about these agents, anthropomorphic cues may either reinforce negative impressions and thus prevent self-congruence from emerging or may be viewed as congruent with users' self-schema when the provided information is favourable. Formally:

P5: The available information about anthropomorphised AI agents moderates the effect of such AI agents on users' self-congruence.

3.2.2.2. Familiarity with AI agents. Consumers can find it difficult to interact with robots (Marinova et al., 2017), as they may experience discomfort when faced with extremely humanlike AI. However, the increased introduction of AI agents into our daily lives, such as in retail, customer service and healthcare (Davenport et al., 2020), provides an opportunity for repeated interactions to offset this discomfort. Such repeated interactions can establish users' familiarity with non-human agents (Guthrie, 1995), and this may also induce comfort with and acceptance of such interactions (Martensen et al., 2018).

Research shows that familiarity with another entity does not guarantee that the individual would experience similarity with it or a positive attitude towards it (Norton et al., 2007). However, interactions in virtual settings such as social network sites may encourage users to find familiarity cues with dissimilar others to generate positive attitudes (Kaptein et al., 2014). The case is likely to be stronger with anthropomorphised AI agents, which can achieve familiarity with users by activating the human schema through their humanlike cues. In fact, anthropomorphism helps users to acquaint themselves with unfamiliar objects by attributing mental states to these agents (Epley et al., 2007; Guthrie, 1995). We propose that such familiarity levels would represent a key moderating factor for users experiencing self-congruence with AI agents. Formally:

P6: High familiarity (as opposed to low familiarity) with AI agents makes it more likely for users to experience self-congruence based on the AI agent's anthropomorphic cues.

3.3. Building block two: self–AI integration with anthropomorphised AI agents

While self-congruence is rooted in perceived similarity between oneself and another entity, it does not per se imply fundamental changes to the self-concept. However, individuals can in some cases relate to external objects or other people more strongly instead of perceiving them only as similar – they can identify with them to the extent that those entities become part of their self-concept or even extend it (Belk, 1988). This is regularly observed in the context of personal and romantic relationships, as postulated by the theory of self-expansion, which explains how a strong connection with one's romantic partners or family members leads to an integration of these relevant others as part of one's own self-schema (Aron and Aron, 1986). Importantly, integrating other entities as part of the self can also occur with inanimate entities. For instance, those products or brands that are particularly powerful in activating consumers' identity themes and allow consumers to express their individual and collective selves can become part of consumers' own identity (Aaker et al., 2004; Belk, 1988). MacInnis and Folkes (2017) highlighted that a connection between the self and inanimate entities might be of particular significance in those cases where such entities are perceived as humanlike in some way. Anthropomorphised AI agents are an important example of such entities, as their presence and use are increasing exponentially. Yet, to date, no research exists concerning whether and how users potentially perceive them as part of their own identities.

3.3.1. Self–AI integration
Prior research has shown that consumers integrate external entities, such as brands, as part of their self-schema because they are relevant to them or because they identify with the brands to some degree (MacInnis and Folkes, 2017). Furthermore, Troye and Supphellen (2012) define "self-integration" as the extent to which consumers perceive a product to be part of themselves. They demonstrate that consumers can experience such self-integration as a result of being highly involved in the process of creating a product and, therefore, experience a degree of relatedness between the outcome and the self. An example of how consumers can become so invested in products that they see them as extensions of themselves (Morewedge et al., 2021) is Coca-Cola's 2014 "Share a Coke" campaign. Here the brand invited customers to customise their cans, which encouraged "investment of the self" in their products (Kirk, 2018).


2018). In the case o AI, anthropomorphic traits can uel the process o While sel–AI integration may be a consequence o dierent actors
such sel-integration, as humanlike interactions provide social meanings or processes, we ocus here specically on such integration as would
to users and inspire them to invest their attention and emotions into the occur as a result o sel-congruence with anthropomorphised AI agents.
AI. We postulate that the more similarity one can experience with such
To account or these phenomena, we propose a new term: “sel–AI agents, the more emotionally attached to them one would become
integration”. We conceptualise it as a change in the sel-concept that (Sheeraz et al., 2018), thus integrating them into one's sel-concept
occurs when users perceive anthropomorphised AI agents to be so (Escalas and Bettman, 2003, 2005). Further, we also explain that sel-
meaningul or relevant in some way that they become a part o their own construal and social exclusion may moderate this process. The visual-
sel-concept. This is in line with the theories o sel-extension (Belk, isation o this process is presented in Fig. E.
1988) and sel-expansion (Aron et al., 2004), which explain how prod-
ucts, relationships and digital possessions can extend one's sel because 3.3.2. Self-congruence and self–AI integration
users invest resources, such as emotions, in them. The process o sel–AI As stated earlier, viewing brands in humanlike terms allows users to
integration is particularly prominent when AI customises the in- view them as similar to their sel-concept and experience sel–brand
teractions based on users' needs and emotions (Huang and Rust, 2021). congruence (MacInnis and Folkes, 2017). Importantly, when users
Such agents can provide users with personally meaningul relationships perceive traits o an external entity to be similar to those traits that are
and interactions that contribute to their identity in a similar manner to central to their own identities, they are more likely to invest their per-
human relationships (Aron et al., 2013). Building on the ndings o sonal resources, such as attention and emotions (Belk, 1988). Further-
Troye and Supphellen (2012), users may urther experience a degree o more, Morewedge et al. (2021) explain that users integrate those
relatedness to AI agents i they view such agents to be shaped by products that generate sel-relevant signals. Rauschnabel and Ahuvia
themselves. AI actions are thus oten a direct result o users' input or (2014) investigated this link in the context o brands and showed that
interactions and this is particularly the case with emotional AI (Huang consumers are more likely to orm close relationships with those brands
and Rust, 2021). The app Replika is a prime example o an emotional AI that are in some ways similar to themselves. Similarly, Park et al. (2010)
with which users can orge such strong relationships that the app be- explain that the closer a product is to the user's refection o themsel, the
comes instrumental to their lives. stronger the bond that the user creates with the brand.
Importantly, anthropomorphism can acilitate this sel-integration We propose that such a link between sel-congruence and sel-
either via cognitive incorporation, where one may think o an object integration may also emerge in the case o AI, because anthro-
in humanlike terms in order to link it cognitively to one's own sel- pomorphised cues can evoke sel-congruence, as previously postulated.
concept, or when the anthropomorphised entity generates meanings in When AI users experience sel-congruence, they are more prone to
social contexts and provides social aliation (Delgado-Ballester et al., investing sel-related resources, such as attention, personal inormation
2019). Anthropomorphised AI agents can generate social meanings by and preerences, making the AI agent meaningul to themselves and thus
creating an impression o social presence (Van Doorn et al., 2017). Ex- integrating it as part o their own identity. Formally:
amples include Cleo, a nancial AI assistant, which can establish its
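Although the propositions in this paper are conceptual, the congruence mechanism behind P7 can be pictured with a small worked example. The following is a minimal, purely illustrative sketch — not part of the proposed framework — which assumes that a user's self-concept and the perceived personality of an AI agent have both been rated on the same hypothetical Big Five inventory (1–7 Likert scores), and expresses self-congruence as the inverse of the normalised distance between the two profiles:

```python
import math

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

def self_congruence(user: dict, agent: dict, lo: float = 1.0, hi: float = 7.0) -> float:
    """Return a 0-1 score: 1 means the two trait profiles are identical."""
    # Largest possible Euclidean distance between two profiles on this scale
    max_dist = math.sqrt(len(TRAITS)) * (hi - lo)
    dist = math.sqrt(sum((user[t] - agent[t]) ** 2 for t in TRAITS))
    return 1.0 - dist / max_dist

# Hypothetical ratings: how a user describes themselves vs. how they perceive a voice assistant
user_profile = {"openness": 6, "conscientiousness": 5, "extraversion": 3, "agreeableness": 6, "neuroticism": 2}
agent_profile = {"openness": 5, "conscientiousness": 6, "extraversion": 3, "agreeableness": 6, "neuroticism": 3}

print(f"self-congruence = {self_congruence(user_profile, agent_profile):.2f}")  # ~0.87
```

On this toy operationalisation, P7 predicts that users whose scores approach 1 are the likeliest candidates for self–AI integration; empirical tests could, of course, substitute any validated self-congruence measure for this illustrative metric.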
3.3.3. Moderators of the relationship between self-congruence and self–AI integration

The relationship between self-congruence and self–AI integration may further depend on factors related to users and social context. We turn to social cognitive theory to examine the role of the self-construal concept (i.e., independent self vs. interdependent self) in shaping this relationship. Users' views of themselves (i.e., self-construal) differ based on their personal and social identities, which may also affect their views of products (Besta, 2018). Furthermore, users often focus on how included or excluded they are in social interactions (Baumeister and Leary, 1995). Thus, we also examine a potential moderating role of social exclusion (Chen et al., 2017).

Fig. E. Block two: self–AI integration as a result of self-congruence with anthropomorphised AI agents.
3.3.3.1. Self-construal. Integrating external entities into one's self-concept raises an important question: how does a person's identity impact the relationships they form with anthropomorphised entities such as AI agents? Social cognitive theory explains that a person's self-construal affects self-related behaviours. Specifically, individuals' consumption of products is influenced by their connection or affiliation with others and their own self (Markus and Kitayama, 1991). Self-construal is defined as the totality of feelings, perceptions and actions that relate to one's relationship to others, as well as one's self-concept being distinct from others, i.e., the way one defines and thinks of oneself concerning the external world (Triandis, 1989; Singelis, 1994). Its key dimensions are the interdependent and independent selves (Markus and Kitayama, 1991), referring to how people perceive themselves as independent from or dependent on others.

People with a strong independent self (i.e., the “independents”) focus on distinguishing themselves from others. They perceive themselves as differentiated from others as they focus on their autonomous goals (Markus and Kitayama, 1991). They are less likely to implicate others in their self-identity, as they view themselves as separated from the social context and have a greater sense of individuality (Duclos and Barasch, 2014). In contrast, “interdependents” rely heavily on their social roles, group-image identity and relationships in forming their identity. This type seeks out relationships with others who are perceived as in-group members, share common goals and are a source of social support, which is a core value for interdependence (Markus and Kitayama, 1991).

Self-construal may be important in the context of AI, as the level of affiliation with robotic agents can impact the user's responses towards them (Epley et al., 2007). Users' reactions differ when the view of the self is rooted in social relationships, as opposed to viewing the self as separate from others (Kühnen et al., 2001). Interdependents are more likely to view anthropomorphised objects as potential entities for relationship formation and to apply relational norms to such interactions (Kwak et al., 2017; Yang et al., 2020).

Extant literature suggests that anthropomorphism alleviates different levels of dissatisfaction depending on the user's self-construal in service failure settings (Fan et al., 2020). In their study, Fan et al. (2020) explain that when service failures occur, interdependents perceive this as a violation of social norms and tend to blame the faulty technology, as they perceive it as an out-group member due to its humanlike traits. Also, Kwak et al. (2017) discuss that interdependents, relative to independents, are more likely to associate with anthropomorphised brands because they view them from a relationship perspective. Hence, anthropomorphised AI agents whose relational cues convey connectedness or even close relationships (e.g., therapists such as Replika, trainers such as Aaptiv, etc.) are more likely to become integrated within the interdependent's self-view. This is in contrast to the independent, who psychologically separates themselves from other entities and relies on personal uniqueness. Formally:

P8: Interdependents (as opposed to independents) are more likely to undergo the process of self–AI integration with those anthropomorphised AI agents that they perceive as congruent to themselves.

3.3.3.2. Social exclusion. Research in psychology and anthropomorphism postulates that human social behaviour is motivated by the need to belong (Baumeister and Leary, 1995; Epley et al., 2007). Users that anthropomorphise non-human entities are more likely to view them as a social affiliation partner (Chen et al., 2017). This can be particularly prevalent among those that experience social exclusion. Social exclusion stems from incidents where people feel left out or rejected in their environment, for example in relationships with family members, friends and colleagues (Baumeister et al., 2005). Therefore, to satisfy their need for affiliation, socially excluded consumers have been shown to accept an anthropomorphised product more readily, as they perceive it to be an interaction partner that substitutes the lack of social affiliation and acceptance from real people (Mourey et al., 2017). AI products such as Google Assistant and Google Home are also said to facilitate a feeling of inclusion and autonomy in daily tasks for people with disabilities (Caggioni, 2019).

Socially excluded users create more proximal relationships with anthropomorphised products. Chen et al. (2017) showed that consumers prefer different kinds of relationships with anthropomorphised products, depending on their belief of how socially excluded they are. Weiner (1985) stresses that people feel socially excluded due to internal or external attributions. When users internally attribute their social exclusion, they blame themselves and experience negative outcomes such as low self-esteem (Weiner, 1985). Such users tend to fear abandonment and look for trust in relationships (Swaminathan et al., 2009). Conversely, users that attribute their social exclusion externally blame their surroundings and are less attached to relationships (Collins and Read, 1990). We extend this reasoning in the context of AI and propose that users who do not feel included in their social surroundings are particularly prone to integrate self-congruent AI as part of their identity to satisfy their need for belongingness and acceptance. Thus:

P9: As opposed to users that do not feel socially excluded, socially excluded users are more likely to integrate those AI agents that they perceive to be congruent with themselves.

3.4. Building block three: implications of self–AI integration

Prior studies examined how anthropomorphism affects user responses (Lu et al., 2019) as well as the potential link between self-congruence and anthropomorphism (MacInnis and Folkes, 2017). However, we know little about what consequences might occur at the individual level and beyond when AI agents are integrated into the self-concept. Following the concept of self–AI integration that we have proposed, we articulate several propositions about the effects that this integration may elicit at the personal, group and societal levels (see Fig. F). We categorise these outcomes on the basis of previous literature (Tuan, 2002). Specifically, we define individual-level outcomes as those that comprise the effects of self–AI integration in the individual realm (e.g., psychological processes, individual behaviour). Group-level outcomes relate to those consequences that are linked to one's group relationships (e.g., social support and inclusion, connection with others). Finally, societal-level outcomes are concerned with the effect of self–AI integration on society as a whole (e.g., the macro-level environment).

Fig. F. Block three: implications of self–AI integration at individual, group and societal levels.

3.4.1. Self–AI integration at an individual level

Recent research highlights that AI agents with humanlike cues and interactions using speech emotion recognition or sentiment analysis techniques (e.g., Alexa, Siri) (Huang and Rust, 2021; Schuller, 2018) may prompt positive customer engagement (Hollebeek et al., 2021; McLeay et al., 2021; Xiao and Kumar, 2021). While engagement is a multidimensional concept (Brodie et al., 2011) comprising cognitive, behavioural and emotional responses, we highlight that users who experience self–AI integration are likely to engage with AI through a particularly prominent emotional connection. As specified in social psychology, namely by Aron et al. (2013), including others in one's
concept is a predictor o strong emotional bonds. Users that integrate benecial group outcomes, critics ear the adverse outcomes o users
anthropomorphised AI agents as part o their sel-concept will thus be being too close to robots, as these might change their perceptions o
likely to connect emotionally with AI agents, consequently also social relationships with other humans. Some users are becoming more
expecting such agents to oer them a support system and ull their comortable interacting with digital assistants than with other humans
emotional expectations, as humans do in relationships (Baumeister and (Christakis, 2019). For instance, users are conessing certain personal
Leary, 1995). In such cases, AI agents can provide social support not matters that they would not share with their spouses to digital assistants,
only in extreme lie experiences, such as losing a loved one, but also in such as Google Assistant. This could threaten the depth o human con-
everyday events. AI could provide important mental health resources (e. nections and thus make individuals more sel-involved and less empa-
g., Replika helping its users to manage anxiety) (Moyle et al., 2013), thetic. According to the sel-expansion theory (Aron and Aron, 1986),
which can ampliy an emotional connection (Van Doorn et al., 2017). individuals eel more connected to partners in close relationships, as
We propose: they eel these validate key aspects o themselves that might otherwise
be ignored (Leary, 2007). Thus, the emotional support that is provided
P10: At an individual level, sel–AI integration acilitates building by AI agents to users might make the users' relationships with other
emotional connections with AI agents. humans appear shallow. Brandtzaeg and Følstad (2017) also highlight
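The sentiment-analysis techniques mentioned at the start of this subsection (Huang and Rust, 2021; Schuller, 2018) can be illustrated with a deliberately simplified sketch. The snippet below is not any production assistant's pipeline; it assumes a toy keyword lexicon and shows only the general pattern of scoring the valence of a user utterance and choosing an emotionally congruent response register:

```python
# Toy valence lexicon; real systems use trained speech/text emotion models.
POSITIVE = {"great", "happy", "love", "excited", "thanks"}
NEGATIVE = {"sad", "lonely", "anxious", "tired", "awful"}

def valence(utterance: str) -> int:
    """Crude sentiment score: +1 per positive cue word, -1 per negative one."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(utterance: str) -> str:
    """Pick a response register that tracks the user's expressed emotional state."""
    score = valence(utterance)
    if score < 0:
        return "That sounds hard. I'm here if you want to talk about it."
    if score > 0:
        return "That's wonderful to hear! Tell me more."
    return "I see. How are you feeling about that?"

print(respond("I feel anxious and tired today"))  # selects the empathetic register
```

Even this minimal loop suggests why emotionally responsive cues can feel like a “support system”: the agent's register tracks the user's expressed state turn by turn.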
Using companion robots has become even more prominent during the COVID-19 pandemic, as people were deprived of social connections (Jecker, 2021) and at the same time reported an increased trust in technology (PRNewsWire, 2021). However, as recently highlighted by Puntoni et al. (2021), “exploited consumers” are losing their control over personal information when disclosing it to AI agents. Importantly, vulnerable people that are socially less privileged are at higher risk of experiencing such privacy invasions. Such individuals would regard AI agents as sources for social affiliation but are less likely to know how to protect themselves from privacy threats.

When self–AI integration occurs, users are more likely to open up to their companions and disclose their personal information (Aron et al., 2013); this may raise privacy concerns. While electronic devices collect personal information about users, the kind of information disclosed to robot friends is more intimate (Jecker, 2021). For example, for mental health AI agents to build relationships and help users to overcome anxiety or loneliness, users must share their personal information to reap the ultimate benefits of these tools. Arguably, anthropomorphic technologies, such as chatbots, nudge consumers to self-disclose because the social cues (e.g., gender, personality, or race) facilitate social interactions (Grewal et al., 2020; Thomaz et al., 2020; Toosi et al., 2012). This may raise ethical issues, such as invasion of privacy as a consequence of AI and robotics usage (Mao et al., 2020). The perceived comfort and closeness that users experience with AI agents imposes a greater risk to the user's privacy because the information they share with AI agents is digitally stored and may be misused (Jecker, 2021; Melumad and Meyer, 2020). We propose:

P11: At an individual level, self–AI integration leads to risks of self-disclosure and invasion of privacy.

3.4.2. Self–AI integration at a group level

The self-expansion model states that users integrate others into their self-concept when they perceive them as social resources that they can connect with (Aron et al., 2004). Prior literature focused on how humanlike companion robots are perceived as partners in “interpersonal” relationships that can lessen one's loneliness (Ta et al., 2020). Recent studies suggest that users who are connected to a social robot experience more community engagement and positive emotions in social interactions (Jeong et al., 2018; Ostrowski et al., 2019). In these studies, robots were perceived as companions, and users relied on the robots' resources (i.e., making suggestions, offering information and connecting users to their networks) to help them connect with other group members by improving their communication skills. We thus argue that users' integration of anthropomorphised AI that possesses social resources may act as a catalyst for social connection with group members.

P12: At a group level, self–AI integration allows users to build social connections by improving community ties.

Although integrating the AI agents into one's self may contribute to beneficial group outcomes, critics fear the adverse outcomes of users being too close to robots, as these might change their perceptions of social relationships with other humans. Some users are becoming more comfortable interacting with digital assistants than with other humans (Christakis, 2019). For instance, users are confessing certain personal matters that they would not share with their spouses to digital assistants, such as Google Assistant. This could threaten the depth of human connections and thus make individuals more self-involved and less empathetic. According to the self-expansion theory (Aron and Aron, 1986), individuals feel more connected to partners in close relationships, as they feel these validate key aspects of themselves that might otherwise be ignored (Leary, 2007). Thus, the emotional support that is provided by AI agents to users might make the users' relationships with other humans appear shallow. Brandtzaeg and Følstad (2017) also highlight that people with a greater need to belong could build social relationships with robots and turn away from relationships with humans. This would activate the “illusion of a relationship” while undermining the richness of human interactions (Bryson, 2010; Turkle, 2007). The effect is likely to be stronger during and after societal challenges, such as COVID-19, when people feel more connected to anthropomorphic robots that are familiar and can help them in their daily tasks (e.g., digital assistants that set reminders, provide entertainment and companionship) (Jecker, 2021). The counterarguments to these criticisms state that people who interact with virtual entities already know that the virtual entity is not real and have the freedom not to use it (Coeckelbergh, 2012). We propose:

P13: At the group level, self–AI integration may make users less empathetic and reciprocal in their relationships with other humans.

3.4.3. Self–AI integration at a societal level

Current debates around disruptive technologies are asking to what extent technologies such as AI are contributing to a “good society” (Wamba et al., 2021) and bringing long-term societal benefits (Kamolsook et al., 2019). Humanlike AI can address global challenges, such as people's loneliness (Ramadan et al., 2021), and provide accurate healthcare diagnostics (Dwivedi et al., 2021; Sidner et al., 2018). Social AI agents can enhance users' well-being and feeling of connectedness in a community by helping them to prompt conversations in offline and online environments (Jeong et al., 2018; Prescott and Robillard, 2021). If AI is integrated into the self-concept, will that inhibit or contribute to society's well-being?

We suggest that self–AI integration promotes societal well-being, as close relationships with anthropomorphised agents allow users to trust such agents to carry out everyday tasks and also because psychological closeness generates higher trust (Valcke et al., 2020). Users that feel connected to thinking or mechanical AI will trust these intelligent assistants to manage repetitive tasks (Huang and Rust, 2021), and this will allow the individuals to focus on more creative forms of work (Forbes, 2021). For instance, AI assistants can respond to students (e.g., IBM's Jill Watson), set budgets or provide financial advice (e.g., Cleo). Moreover, feeling AI can offer users empathy also because it is capable of learning from previous interactions (e.g., Sophia the chatbot; Huang and Rust, 2021). In service settings, the satisfaction that users experience with anthropomorphised AI agents that understand their needs (Choi et al., 2021) would foster social trust in these systems. In a healthcare context, if users perceived AI-fuelled devices and digital assistants who help users and monitor their treatments (e.g., AiCure) to be part of who they are, then such users could more easily rely upon such AI. This would subsequently reduce the burden on medical staff and address the shortage of humans providing companionship and (health)care services (Wyatt, 2020), thus also allowing for cost reduction in the public sector. If self–AI integration occurs, such AI agents can be viewed as a good replacement for a human carer in a healthcare context.
P14: Sel–AI integration could contribute to societal well-being, as: explain how users can meaningully relate to such AI agents via their
a) users would delegate more tasks to AI agents and retain their sel-concept and build relationships with them. Crucially, some prior
cognitive resources or more meaningul pursuits; and b) AI could studies that ocused on human reactions to robots and AI systems
deliver important societal tasks and services. remained general in terms o the type o anthropomorphism that such
systems displayed (Dwivedi et al., 2021; Xiao and Kumar, 2021; Yoga-
The advantages o anthropomorphic AI agents within dierent nathan et al., 2021). This paper, in contrast, highlights that dierent
realms are counteracted by undamental ethical issues. McLeay et al. types o anthropomorphism (based on humanlike physical, emotional or
(2021) postulate that humanoid AI agents are negatively perceived by personality traits) as established in prior research (MacInnis and Folkes,
users that have stronger preerences or ethical/responsible providers. 2017) lead to sel-congruence in dierent manners. As such, we un-
When users perceive these agents as part o their identity and delegate a derline the importance o accounting or the complexity o AI anthro-
signicant proportion o their daily tasks to them, their decision-making pomorphism in urther studies that will build on our and others' related
abilities may decline, as excessive attachment to AI agents makes people research (e.g., Yoganathan et al., 2021).
rely on these agents or opinions and ideas. At the societal level, this Further, we speciy which user-related and situational traits mod-
could cause a phenomenon described as “digital dementia”: weakened erate these relationships, validating some o the prior work in this area,
mental abilities o users when perorming those tasks that have been but also adding to the list o potential moderating actors that represent
outsourced to AI agents as a result o over-reliance on technology boundary conditions or the eects o AI. Previous research, or instance,
(Dossey, 2014). This could potentially result in a society where its investigated the moderating role o personality traits, such as McCrae
members require more support to make their own daily decisions or to et al.'s (1996) Big Five, on human–robot relationship outcomes (Kaplan
orm opinions regarding key issues, such as climate change and eco- et al., 2019), and our ramework conrms the importance o extraver-
nomic development. AI agents are trained to share particular views on sion/introversion in relation to the eect o anthropomorphised AI on
these topics with users (Walz and Firth-Buttereld, 2018). When users sel-congruence. In addition, we identiy users' innovativeness (Melián-
consider robots as an integral part o their sel-concepts, they are likely González et al., 2021) and need to belong (Brandtzaeg and Følstad,
to be less critical o them and rely on them more. This would allow AI to 2017) as key moderating actors o the eects o anthropomorphised AI
have a strong infuence on public opinion (Walz and Firth-Buttereld, on sel-congruence. This is also in response to Choi et al. (2021), who
2018). Formally: highlighted the importance o users' innovativeness in the context o
P15: At a societal level, sel–AI integration could lead to digital de- user–AI relationships. Importantly, these key user-related traits can be
mentia and decreased decision-making capabilities due to over-reliance considered as important moderating actors or other related rame-
on AI agents. works, such as the one proposed by Xiao and Kumar (2021). In addition
to technology readiness and demographics as user characteristics, which
4. Discussion and conclusion Xiao and Kumar (2021) identied as crucial or contributing to robot
acceptance, the moderating actors identied in our paper are likely to
In this paper, we conceptualise how users might relate to dierent be o relevance or user interactions with anthropomorphised robots.
types o AI anthropomorphism (i.e., physical, personality, emotional) on Second, we propose a novel concept: sel–AI integration. This unveils
the basis o their sel-concept. Specically, we propose that anthro- the process o incorporating an anthropomorphised AI agent into one's
pomorphised AI will make users experience sel-congruence with such sel-schema as a result o experiencing a cognitive match between AI and
AI agents, and in some cases even sel–AI integration, which we propose the sel, i.e., sel-congruence. While prior research showed that con-
as a new concept in this paper. We also highlight a number o potential sumers integrate objects or brands into their sel-concept (Belk, 1988;
user-related and situational actors that may moderate these relation- Gill-Simmen et al., 2018), we propose here that this integration may also
ships. Finally, we explain that these psychological processes lead to a occur with anthropomorphised AI agents. Importantly though, users
variety o consequences at the personal, group and societal levels. We experience sel-congruence with anthropomorphised AI in a dierent
postulate teen propositions about these specic relationships between manner than with other product categories (MacInnis and Folkes, 2017).
users and anthropomorphised AI agents and propose a conceptual This is because AI agents carry a wide set o resources (e.g., inorma-
ramework on this basis. In doing so, we respond to research calls to tional, emotional, etc.) that users can view as aligned with themselves.
investigate the psychological consequences o users' associations with Users also attribute social meanings to AI agents, who then act as a
anthropomorphised AI (MacInnis and Folkes, 2017; McLeay et al., source o social aliation (Escalas and Bettman, 2005), allowing users
2021). Below, we discuss the three main contributions o our work and to project their identity onto such AI. In turn, the social meanings
put orward a research agenda that highlights the need or uture conveyed by AI agents can be incorporated into the user's sel-concept to
research in this eld. urther enhance their sense o identity. As such, our work extends the
sel-expansion theory (Aron and Aron, 1986) in the area o AI and
4.1. Theoretical contributions contributes to the stream o studies that show how people integrate
inanimate objects into their sel-concept (Delgado-Ballester et al., 2017,
First, this paper advances the literature on anthropomorphism by 2019; Troye and Supphellen, 2012). Furthermore, building on Kwak
conceptualising the eects o dierent types o anthropomorphised AI et al.'s (2017) study, which showed how users evaluate anthro-
on sel-concept and, more specically, on sel-congruence. Recent work pomorphised brands based on their sel-construal, we conceptualise that
examined how anthropomorphised entities, such as brands, products or users will integrate AI-related resources into their sel-concept dier-
AI, aect customer engagement (Lu et al., 2019), robot adoption (Xiao ently, depending on moderating actors such as sel-construal. In-
and Kumar, 2021) and customer satisaction (Choi et al., 2021), but terdependents will look at anthropomorphised AI agents as resources
there has not yet been an investigation o the eects o anthropomorphic that enhance their social belonging or social support, while in-
AI on users' sel-concept. This research explains how the anthro- dependents will look or the anthropomorphised cues that would signal
pomorphised AI agents' physical, personality and emotional traits make their uniqueness to others. The related propositions identiy specic
users experience sel-congruence with such AI agents because they mechanisms that underpin the process o sel–AI integration.
perceive the agent's traits to be similar to their own. We respond to Finally, we propose that sel-congruence and sel-integration may
research calls to understand users' reactions to eeling AI agents and the mediate the eects o anthropomorphised AI on a variety o outcomes at
relationships that emerge between users and AI in such instances (Huang users' personal, group and societal levels. While prior literature identi-
and Rust, 2021). Our study draws attention to the psychological pro- ed a range o consumer responses that may emerge rom interactions
cesses o sel-congruence and sel–AI integration as key concepts that with anthropomorphised objects (Mende et al., 2019; Unal et al., 2018),

13
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786

the role o sel-congruence and the integration o anthropomorphised AI intelligence.


in mediating these eects were not considered. For instance, some Moreover, marketers should collect data on how their target cus-
studies specied that humanlike AI would help to address social chal- tomers perceive anthropomorphised agents, given that the lack o a-
lenges such as loneliness (Dwivedi et al., 2021) or would improve the miliarity and negative individual knowledge o AI might halt sel-
perception that the robots possessed warmth and competence (Yoga- congruence and hinder a closer bond between anthropomorphised AI
nathan et al., 2021) but did not account or sel-related experiences. This and consumers. To rectiy the negative perceptions o robots, marketing
research postulates that by acilitating the eeling o “social presence” departments should emphasise agents' potential benets, or example
provided by robots (Araujo, 2018; Wirtz et al., 2018), users will be able how AI agents can help users to improve their liestyles by providing
to orm stronger emotional connections with these agents. We also add them with greater eciency. Managers can work on recrating narra-
to prior work that discusses how robots can shape experience in com- tives around AI, rom the “Terminator-style” imagery to more realistic
munities (Ostrowski et al., 2019) and societies (Mao et al., 2020; Wyatt, and appealing representations. One approach that may help to reshape
2020) by explaining how sel–AI integration can change the way users perceptions o AI agents is to portray their mundane nature as virtual
behave, not only towards themselves but towards others. For instance, assistants to users in their everyday activities.
users can rely on AI agents to help them learn how to connect socially To achieve sel-congruence and sel–AI integration, users o dierent
with others and generate positive social interactions (Jeong et al., 2018). sel-construal (i.e., independent vs. interdependent) should be treated
However, this work highlights that the eects o sel–AI integration are dierently. Practitioners should provide AI-related experiences that
not always benecial and contributes to prior critical literature on the enhance a user's sense o individuality or highlight their group-image
potential negative eects o technologies (Loveridge and Street, 2005). identity. For instance, managers that are marketing AI agents rom a
Ethical and moral challenges may arise as a result, or example increased relational standpoint can enhance their advertising promotional mate-
risk o invasion o privacy and a threat o less empathetic relationships rial by highlighting the social roles o their AI agents (Huang and Rust,
(Christakis, 2019), as well as a potential increase in digital dementia 2018). Examples o branding elements that signal close relationships
(Dossey, 2014). This range o outcomes underlines how important it is to include Replika, an emotional AI agent with the slogan “The AI Com-
address sel–AI integration, which may result rom interactions with panion Who Cares”. By tapping into consumers' social and relational
anthropomorphised AI and sel-congruence with AI agents. To our needs in such a manner (Wirtz et al., 2018), AI agents can be positioned
knowledge, this is the rst theoretical ramework that highlights the in the mind o the consumer in proximity to their sel-concept. By
complexity and relevance o the eects that anthropomorphised AI may achieving that, brands would be able to increase their customer lietime
create in relation to the sel-concept. Understanding these processes also value (Huang and Rust, 2021). Moreover, the relationship with cus-
uncovers urther research questions that are outlined in the research tomers can be deepened by adopting hyper-personalisation marketing
agenda (see Section 4.3). strategies that create targeted experiences through the use o AI data
sources, which may increase customer satisaction, brand loyalty and
4.2. Managerial implications willingness to spend (Deloitte, 2020). This is relevant in the case o AI
agents such as chatbots, which learn rom users' behaviour and deliver
The prevalence o service robots and AI systems in consumers' lives personalised services.
heightens opportunities or businesses to adopt such services in order to The widespread use o anthropomorphised AI agents in contempo-
optimise their resources (i.e., costs and time) (Ivanov and Webster, rary consumer markets paves the way or intimate relationships between
2017). Our study highlights how users might deepen their relationships AI and consumers (Puntoni et al., 2021). Managers may highlight the
with anthropomorphised agents due to their humanlike cues. Such social roles that these AI agents can occupy to provide users with a sense
proximity could allow managers to achieve certain business targets, o belonging and prompt emotional connection. Projecting these AI
such as high engagement (Huang and Rust, 2021); however, these re- agents as resources to tackle issues such as mental health issues and
lationships can also have drawbacks at the personal, group and societal personal growth may help managers to strengthen the user–AI bond.
levels. We explain how practitioners can translate the insights o our However, managers should be aware that the potential gains are
paper to actionable takeaways. accompanied by ethical challenge(s). The opportunities associated with
The proposed conceptual ramework shows that the dynamics o sel- AI may be counteracted by weakened human relationships, invasions o
congruence and sel–AI integration are not only associated with AI privacy and digital dementia. I consumers establish an intimate bond
agents' physical anthropomorphic appearance but also encompass other with AI agents as a result o sel-integration, they will potentially credit
dimensions, such as personality and emotional traits. This is critical to these agents with excessive trust, which would risk their privacy and
the service industry, which leverages anthropomorphised AI agents not could harm their relationships with their surroundings. To oset such
only to establish consumer trust but also to strengthen the user–robot challenges, managers should invest in privacy-sensitive robots and
bond (Blut et al., 2021). While managers should continue to improve continuously educate consumers about potential adverse eects.
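The trait-adaptive behaviour described above can be pictured with a small sketch. This is an illustrative sketch only — not the design of Amelia or any deployed system — and the trait estimates, thresholds and style options below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    # Running trait estimates (0-1), hypothetically inferred from interaction history
    extraversion: float
    need_to_belong: float

@dataclass
class AgentStyle:
    small_talk: bool          # open with social chit-chat?
    expressive_cues: bool     # use emoji/interjections and warmer phrasing?
    relational_framing: bool  # frame answers in community-oriented, "we" terms?

def adapt_style(user: UserModel) -> AgentStyle:
    """Map inferred user traits to the density of social cues in agent replies."""
    sociable = user.extraversion > 0.6 or user.need_to_belong > 0.6
    return AgentStyle(
        small_talk=sociable,
        expressive_cues=user.extraversion > 0.6,
        relational_framing=user.need_to_belong > 0.6,
    )

style = adapt_style(UserModel(extraversion=0.8, need_to_belong=0.4))
print(style)  # AgentStyle(small_talk=True, expressive_cues=True, relational_framing=False)
```

In practice, the trait estimates would come from a learned model rather than fixed numbers, but the managerial point stands either way: the mapping from inferred traits to social cues is an explicit design decision, not an emergent property of the system.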
design eatures o anthropomorphised AI-enabled products to orm
strong connections with users, there is no one-size-ts-all approach. To 4.3. Limitations and research agenda
account or dierences among their customers, managers should
consider leveraging AI algorithms, such as deep learning, that allow The present research and ramework extends current theoretical
virtual agents to learn the user's personality traits through continuous knowledge on anthropomorphism and AI, thus opening avenues or
interactions and to adapt accordingly. For instance, to build social others to conduct urther research on this topic. We speciy which
connections with extraverted customers or those with a high need to theoretical approaches can advance the knowledge in this area and
belong, the introduced AI agents should be able to communicate with which aspects call or urther investigation. Also, as the main limitation
more social cues. Examples o such agents include Amelia, an AI bot that o this paper is the theoretical nature o the proposed concepts and re-
was employed by the UK's Eneld Council to interpret the questions o lationships, we discuss below how this can be overcome through
local residents and tailor its reactions according to users' personalities empirical research. We organise the uture directions within our pro-
(Vogl et al., 2020). Such changes would increase the perceived useul- posed research agenda in three areas – theory, context and
ness and ease o use o AI agents, contributing to higher service quality methodology.
and increased company prots (Xiao and Kumar, 2021). Further, to
appeal to more innovative individuals, AI agents should educate and 4.3.1. Future directions – theory
inorm their users about the system's advanced capabilities, their tech- We recommend that uture research considers enhancing the present
nical sophistication and how such agents can replicate human conceptual model via more in-depth exploration o the various concepts

14
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786

and relationships and via the integration o additional cross-disciplinary Table C


concepts and/or theories rom psychology, consumer behaviour and Research agenda or theoretical advances.
human-robot interaction. For example, uture work could consider how Construct Research Questions
the emotional responses that emerge during interactions with anthro-
Anthropomorphism What emotions are generated by the user–AI interaction and
pomorphised AI might strengthen or hinder sel–AI integration. This how is the sel–AI integration built or strengthened by these?
could be approached via the Pleasure-Arousal-Dominance model How might the gender aspects aect consumers' interactions
(Mehrabian and Russell, 1974). Additionally, uture work could with and perceptions o AI?
consider the role o AI gender and integrate perspectives rom research How do gender-specic characteristics o AI agents harm
societal values, such as equality and diversity?
on gender bias against AI systems (Diederich et al., 2022; West et al., Does the phenomenon o digital domesticity aect users'
2019) and unconscious discrimination, such as the recently identied perceptions o anthropomorphised AI?
phenomenon o “digital domesticity” (Woods, 2018). This would be How can an AI agent tailor their interaction to an individual's
relevant to explore given that current AI deployment shows a preerence traits to drive behavioural outcomes?
What are the eects o users co-creating their AI agent traits to
towards imbuing AI agents with emale characteristics (e.g., name,
t their preerences?
voice). Moreover, uture research should also consider how users would Sel-Congruence Can trust in anthropomorphised AI agents be promoted
experience congruence with gender-neutral robots and i this process through sel-congruence?
changes depending on the user's gender identication. Such research What are the eects o a user's sel-congruence with AI agents
should be inclusive and consider the ull range o gender identities (e.g., on their human-to-human relationships? Would this lead to
dehumanisation?
cisgender, transgender, non-binary). Which type o AI anthropomorphism is strongest in
It would be important to consider how the process o personalisation stimulating sel-congruence?
aects the relationships we propose in the ramework. Interactions with How important is the AI agent's role (e.g., riend, mentor,
consumers are becoming increasingly customised thanks to machine companion) in heightening the user's sel-congruence?
Sel-Construal How does the role o AI products aect users with dierent
learning advancements, allowing AI agents to adapt their interactions to
sel-construal?
specic user proles. Future research should expand on dierent types Do products refecting values such as saety or belongingness
o personalisation (Huang and Rust, 2021) (e.g., related/matching aect the process o sel–AI integration?
consumer traits, emotions), as well as the dynamic between AI agents Individual Level What are the risks associated with data misuse by AI agents
and the user in co-creating personalised interactions (Knote et al., 2020), that collect user inormation, and how do they aect user
relationships with such agents?
to shed more light on the associated outcomes.
How does the level o involvement with certain types o AI
In addition, uture studies could try to ocus urther on key concepts agents (mechanical, thinking, eeling) aect the user's
rom the proposed ramework, i.e., sel-congruence and sel–AI inte- emotional connection?
gration. Further work could, or instance, examine whether sel-AI Can AI agents be seen as social actors that infuence children's
behaviour when used in the home setting?
integration is potentially a multidimensional concept. Research could
Conversely, would the use o some AI agents encourage rude
also be channelled towards urther unpacking the relationship between manners at home and beyond as a result o using a
these concepts and other relevant variables, such as trust, which commanding tone with AI agents?
human–robot interaction studies highlighted as an important aspect (De Group Level Does negative public opinion about AI agents aect users'
Visser et al., 2016; Robert et al., 2020). More insights are needed about social connection with them?
What are the ethical implications o interactions between
the role o trust/distrust between users and AI agents (Miller et al.,
anthropomorphised robots and vulnerable groups?
2021) and how sel-congruence might aect aspects o both trust- Societal Level Which anthropomorphic cues should marketers leverage to
building and trust repair. ampliy the eects o anthropomorphised AI agents on
Although the present study aims to cover an array o possible out- societal well-being?
When users develop an emotional attachment to AI agents,
comes o sel–AI integration at the personal, group and societal levels,
should the ormer be legally protected to prevent them rom
the outcomes remain to be examined urther and in more detail. One being harmed?
question that we consider particularly important is the potential impact Context How does the interaction with AI agents dier in public vs.
o sel–AI integration on human-to-human relationships and how exactly private contexts?
that can aect isolation and dehumanisation (Haslam, 2006). Table C What role does the cultural context play in uelling or
inhibiting user–AI relationships? How eective is public
presents the questions raised above and other important queries in this
opinion in infuencing user perceptions o humanlike AI?
theoretical domain.

4.3.2. Future directions – context to anthropomorphised robots may dier across users' cultural back-
The application o AI services (e.g., virtual agents, chatbots, service grounds (Epley et al., 2007; Fan et al., 2020). While some o the
robots) has penetrated dierent sectors (see Table A), but most studies contextual actors uelling sel–AI relationships might be external, other
have investigated AI anthropomorphism at a generic rather than a actors may be more closely related to the AI and its uses. Prior literature
contextual level (Diederich et al., 2022). Nonetheless, while investi- highlighted task type as the key actor that infuences the use o AI
gating the eects o anthropomorphic AI on users' psychological pro- (Huang and Rust, 2021; Xu et al., 2020). It is thus highly important that
cesses, researchers should account or the act that people might react uture research examines the potential moderating eect o task type on
and interact dierently depending on context (Zhang et al., 2019). the relationship between anthropomorphism and sel-congruence and/
Specically, we draw attention to potential changes between private or sel–AI integration.
and public contexts. Prior research notes that sel-congruence and We also highlight the importance o understanding better the eect
product/service evaluations can be aected by the public context o amiliarity in relation to emotional connection with anthro-
(Grae, 1996). However, many interactions with anthropomorphised AI pomorphised robots, as well as the potential risks o data abuse associ-
agents, such as Amazon's Alexa Home, take place in private settings, so ated with anthropomorphised AI use. Future research could examine
we suggest that uture research ocuses extensively on the private legal/regulatory and political actors or circumstances that might aect
context. public opinion and perceptions o AI, as well as the identity-related
Furthermore, the eects o anthropomorphism can depend on cul- concept that this paper highlights and its subsequent outcomes.
tural context, which is still underexplored in this area (Diederich et al.,
2022). Future cross-cultural comparisons may provide more depth to the 4.3.3. Future directions – methodology
societal outcomes we propose in our ramework, given that the reactions While our ramework makes several contributions at a theoretical

15
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786

level by presenting the potential psychological processes that are Aggarwal, P., McGill, A.L., 2007. Is that car smiling at me? Schema congruity as a basis
or evaluating anthropomorphized products. J. Consum. Res. 34 (4), 468–479.
impacted by anthropomorphised AI, empirical insights and validation
Aggarwal, P., McGill, A.L., 2012. When brands seem human, do humans act like brands?
rom practitioners and researchers are required to examine these prop- Automatic behavioral priming eects o brand anthropomorphism. J. Consum. Res.
ositions urther. We oer some directions or uture empirical studies. 34 (2), 307–323.
The literature in this area is to some extent dominated by conceptual Aguirre-Rodriguez, A., Bosnjak, M., Sirgy, M.J., 2012. Moderators o the sel-congruity
eect on consumer decision-making: a meta-analysis. J. Bus. Res. 65 (8), 1179–1188.
papers (Huang and Rust, 2018, 2021; MacInnis and Folkes, 2017; Xiao Airenti, G., 2018. The development o anthropomorphism in interaction:
and Kumar, 2021) and quantitative studies using experiments or surveys intersubjectivity, imagination, and theory o mind. Front. Psychol. 9, 1664-1078.
(Longoni et al., 2019; Mende et al., 2019; Troye and Supphellen, 2012). Araujo, T., 2018. Living up to the chatbot hype: the infuence o anthropomorphic design
cues and communicative agency raming on conversational agent and company
Some o these experimental studies capture momentary use by asking perceptions. Comput. Hum. Behav. 85, 183–189.
respondents to imagine a scenario o their interaction with anthro- Aron, A., Aron, E.N., 1986. Love and the Expansion o Sel: Understanding Attraction and
pomorphised AI agents and analysing their behaviour accordingly Satisaction. Hemisphere Publishing Corp/Harper & Row Publishers.
Aron, A., McLaughlin-Volpe, T., Mashek, D., Lewandowski, G., Wright, S.C., Aron, E.N.,
(Yoganathan et al., 2021). Other studies use online-based reviews to 2004. Including others in the sel. Eur. Rev. Soc. Psychol. 15 (1), 101–132.
retrieve insights about their experience with AI agents (Ramadan et al., Aron, A., Lewandowski Jr., G.W., Mashek Jr., D., Aron Jr., E.N., 2013. The Sel-expansion
2021; Ta et al., 2020). However, the user–AI relationship, like any Model o Motivation and Cognition in Close Relationships. Oxord University Press.
Aw, E., Flynn, L.R., Chong, H.X., 2019. Antecedents and consequences o sel-congruity:
relationship, is expected to evolve over time. We thereore suggest that replication and extension. J. Consum. Mark. 36 (1), 102–112.
uture research employs longitudinal studies to monitor the changes in Baumeister, R., Leary, M.R., 1995. The need to belong: desire or interpersonal
the relationship over time, also in relation to sel–AI integration. Qual- attachments as a undamental human motivation. Psychol. Bull. 117 (3), 497–529.
Baumeister, R., DeWall, C.N., Ciarocco, N.J., Twenge, J.M., 2005. Social exclusion
itative studies on user–AI relationships are underrepresented (see
impairs sel-regulation. J. Pers. Soc. Psychol. 88 (4), 589–604.
Ramadan et al., 2021; Ta et al., 2020). Users undergo unique experi- Belk, R., 1988. Possessions and the extended sel. J. Consum. Res. 15 (2), 139–168.
ences specic to their individual contexts (Ramadan et al., 2021; Wang Besta, T., 2018. Independent and interdependent? Agentic and communal? Sel-
and Krumhuber, 2018) and personal characteristics, which qualitative construals o people used with a group. Ann. Psychol. 34 (1), 123–134.
Bishop, L., van Maris, A., Dogramadzi, S., Zook, N., 2019. Social robots: the infuence o
studies (e.g., ocus groups, in-depth interviews, or ethnography) could human and robot characteristics on acceptance. Paladyn, J. Behav. Robot. 10 (1),
uncover. Such studies can employ methods specic to certain AI uses (e. 346–358.
g., individual or household interviews) to provide detailed insights into Blut, M., Wang, C., Wünderlich, N.V., Brock, C., 2021. Understanding anthropomorphism
in service provision: a meta-analysis o physical robots, chatbots, and other AI.
how complex processes o sel-congruence and sel–AI integration might J. Acad. Mark. Sci. 49, 632–658.
change across individual and societal contexts. Brandtzaeg, P., Følstad, A., 2017. Why people use chatbots. In: Proceeedings o the
Another key aspect that deserves attention is the measurement o the International Conerence on Internet Science—INSCI. Springer, pp. 377–392.
Brodie, R., Hollebeek, L.D., Jurić, B., Ilić, A., 2011. Customer engagement: conceptual
anthropomorphism construct. Given that anthropomorphism could be domain, undamental propositions, and implications or research. J. Serv. Res. 17
applicable to any non-human entity (Epley, 2018), careul consideration (3), 252–271.
o existing multidimensional scales is needed. As AI agents continue to Bryson, J., 2010. Robots should be slaves. Close engagements with articial companions:
key social, psychological. In: Wilks (Ed.), Ethical and Design Issue. John Benjamin,
be improved, their anthropomorphism also evolves in terms o dierent Amsterdam.
cues. Future research should ocus on developing adequate measure- Büyükdağ, N., Kitapci, O., 2021. Antecedents o consumer-brand identication in terms
ments or these new cues or characteristics, as existing scales cannot o belonging brands. J. Retail. Consum. Serv. 59, 102–420.
Caggioni, L., 2019. Project DIVA: making the google assistant more accessible [Online].
always be adapted appropriately or might capture only limited aspects
Available at: https://experiments.withgoogle.com/project-diva. (Accessed 29
o anthropomorphism. Moreover, AI agents are a diverse category. September 2021).
Chatbots, or example, do not exhibit acial expressions as social robots Calvo-Barajas, N., Perugia, G., Castellano, G., 2020. The eects o robot’s acial
do, rather relying on other identity cues that are vital to a chatbot's expressions on children’s rst impressions o trustworthiness. In: 2020 29th IEEE
International Conerence on Robot and Human Interactive Communication (RO-
perormance (Go and Sundar, 2019; see also Sheehan et al., 2020). MAN). IEEE, pp. 165–171.
Measurement scales that encompass the multidimensionality o Cambridge Consultants, 2019. Use o AI in online content moderation. 2019 report
anthropomorphism and its various contextual applications are thus an produced on behal o Ocom [Online]. Available at: https://www.ocom.org.uk/
__data/assets/pd_le/0028/157249/cambridge-consultants-ai-content-moderation.
important requirement or uture quantitative research. pd. (Accessed 21 January 2021).
In conclusion, our paper studies the potential eects o anthro- Capatina, A., Kachour, M., Lichy, J., Micu, A., Micu, A.-E., Codignola, F., 2020. Matching
pomorphised AI agents on user sel-concept. We assess positive and the uture capabilities o an articial intelligence-based sotware or social media
marketing with potential users’ expectations. Technol. Forecast. Soc. Chang. 151 (C),
negative outcomes o sel–AI integration, while considering the possible 119794.
moderators o this relationship. The proposed conceptual ramework Cerekovic, A., Aran, O., Gatica-Perez, D., 2016. Rapport with virtual agents: what do
should help uture research to unveil novel eects o these prevalent human social cues and personality explain? IEEE Trans. Aect. Comput. 8 (3),
382–395.
anthropomorphised agents on salient aspects o our daily lives. We Chen, Y., Nelson, L.D., Hsu, M., 2015. From “where” to “what”: distributed
complement our conceptual ramework with a research agenda that representations o brand associations in the human brain. J. Mark. Res. 52 (4),
addresses our key concepts rom theoretical, contextual and methodo- 453–466.
Chen, R., Wan, E.W., Levy, E., 2017. The eect o social exclusion on consumer
logical perspectives. In doing so, we speciy the urther work that would
preerence or anthropomorphized brands. J. Consum. Psychol. 27 (1), 23–34.
help to enrich our understanding o the user–AI relationship. Choi, S., Mattila, A.S., Bolton, L.E., 2021. To err is human (-oid): how do consumers react
to robot service ailure and recovery? J. Serv. Res. 24 (3), 354–371.
Christakis, N., 2019. How AI will rewire us. The Atlantic. Available at: https://www.
CRediT authorship contribution statement theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/.
(Accessed 26 February 2021).
Conceptualisation: Amani Alabed, Ana Javornik, Diana Gregory- Coeckelbergh, M., 2012. Care robots, virtual virtue, and the best possible lie. In:
Brey, P., Briggle, A., Spence, E. (Eds.), The Good Lie in a Technological Age. Taylor
Smith; Writing - Original Drat: Amani Alabed, Ana Javornik, Diana
& Francis, Abingdon.
Gregory-Smith; Writing - Review & Editing - Amani Alabed, Ana Jav- Collins, N., Read, S.J., 1990. Adult attachment, working models, and relationship quality
ornik; Visualisation: Amani Alabed; Supervision: Ana Javornik, Diana in dating couples. J. Pers. Soc. Psychol. 58 (4), 644–664.
Cross, S., Gore, J.S., Morris, M.L., 2003. The relational-interdependent sel-construal,
Gregory-Smith; Funding: Diana Gregory-Smith.
sel-concept consistency, and well-being. J. Pers. Soc. Psychol. 85 (5), 933–944.
Darling, K., 2016. Extending legal protection to social robots: the eects o
References anthropomorphism, empathy, and violent behavior towards robotic objects. In:
Robot Law. Edward Elgar Publishing.
Darling, K., Nandy, P., Breazeal, C., 2015. Empathic concern and the eect o stories in
Aaker, J., Fournier, S., Brasel, S.A., 2004. When good brands do bad. J. Consum. Res. 31
human-robot interaction. In: 2015 24th IEEE International Symposium on Robot and
(1), 1–16.
Human Interactive Communication (RO-MAN). IEEE, pp. 770–775.
Abosag, I., Ramadan, Z.B., Baker, T., Jin, Z., 2020. Customers’ need or uniqueness
Davenport, T., Ronanki, R., 2018. Articial intelligence or the real world. Harv. Bus.
theory versus brand congruence theory: the impact on satisaction with social
Rev. 96 (1), 108–116.
network sites. J. Bus. Res. 117 (C), 862–872.

16
A. Alabed et al. Technological Forecasting & Social Change 182 (2022) 121786

Davenport, T., Guha, A., Grewal, D., Bressgott, T., 2020. How artificial intelligence will change the future of marketing. J. Acad. Mark. Sci. 48 (1), 24–42.
De Visser, E., Monfort, S.S., McKendrick, R., Smith, M.A., McKnight, P.E., Krueger, F., Parasuraman, R., 2016. Almost human: anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 22 (3), 331–349.
Delgado-Ballester, E., Palazón, M., Pelaez, J., 2017. This anthropomorphised brand is so loveable: the role of self-brand integration. Span. J. Mark. 21 (2), 89–101.
Delgado-Ballester, E., Palazón, M., Peláez, J., 2019. Anthropomorphized vs objectified brands: which brand version is more loved? Eur. J. Manag. Bus. Econ. 29 (2), 150–165.
Deloitte, 2018. Chatbots point of view. Deloitte artificial intelligence [Online]. Available at: https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/deloitte-analytics/deloitte-nl-chatbots-moving-beyond-the-hype.pdf. (Accessed 20 January 2021).
Deloitte, 2020. Connecting with meaning: hyper-personalizing the customer experience using data, analytics, and AI [Online]. Available at: https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/deloitte-analytics/ca-en-omnia-ai-marketing-pov-fin-jun24-aoda.pdf. (Accessed 20 January 2021).
Derrick, J., Gabriel, S., Hugenberg, K., 2009. Social surrogacy: how favored television programs provide the experience of belonging. J. Exp. Soc. Psychol. 45 (2), 352–362.
Diederich, S., Brendel, A.B., Morana, S., Kolbe, L., 2022. On the design of and interaction with conversational agents: an organizing and assessing review of human-computer interaction research. J. Assoc. Inf. Syst. 23 (1), 96–138.
Dietvorst, B., Simmons, J.P., Massey, C., 2018. Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 64 (3), 1155–1170.
Dossey, L., 2014. FOMO, digital dementia, and our dangerous experiment. Explore 10 (2), 69–73.
Duclos, R., Barasch, A., 2014. Prosocial behavior in intergroup relations: how donor self-construal and recipient group-membership shape generosity. J. Consum. Res. 41 (1), 93–108.
Duggan, G.B., 2016. Applying psychology to understand relationships with technology: from ELIZA to interactive healthcare. Behav. Inform. Technol. 35 (7), 536–547.
Dwivedi, Y., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., 2021. Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 57, 101994.
Epley, N., 2018. A mind like mine: the exceptionally ordinary underpinnings of anthropomorphism. J. Assoc. Consum. Res. 3 (4), 591–598.
Epley, N., Waytz, A., Cacioppo, J.T., 2007. On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114 (4), 864–886.
Escalas, J., Bettman, J.R., 2003. You are what they eat: the influence of reference groups on consumers' connections to brands. J. Consum. Psychol. 13 (3), 339–348.
Escalas, J., Bettman, J.R., 2005. Self-construal, reference groups, and brand meaning. J. Consum. Res. 32 (3), 378–389.
Fadhil, A., Schiavo, G., Wang, Y., Yilma, B.A., 2018. The effect of emojis when interacting with conversational interface assisted health coaching system. In: Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare, pp. 378–383.
Fan, A., Wu, L., Miao, L., Mattila, A.S., 2020. When does technology anthropomorphism help alleviate customer dissatisfaction after a service failure? The moderating role of consumer technology self-efficacy and interdependent self-construal. J. Hosp. Mark. Manag. 29 (3), 269–290.
Fennis, B., Pruyn, A.T.H., 2007. You are what you wear: brand personality influences on consumer impression formation. J. Bus. Res. 60 (6), 634–639.
Ferrari, F., Paladino, M.P., Jetten, J., 2016. Blurring human–machine distinctions: anthropomorphic appearance in social robots as a threat to human distinctiveness. Int. J. Soc. Robot. 8 (2), 287–302.
Forbes, 2021. Council post: 14 ways AI will benefit or harm society [Online]. Available at: https://www.forbes.com/sites/forbestechcouncil/2018/03/01/14-ways-ai-will-benefit-or-harm-society/?sh=6d975ed44e0. (Accessed 2 October 2021).
Fraser, E., Pakenham, K.I., 2009. Resilience in children of parents with mental illness: relations between mental health literacy, social connectedness and coping, and both adjustment and caregiving. Psychol. Health Med. 14 (5), 573–584.
Furman, J., Seamans, R., 2019. AI and the economy. Innov. Policy Econ. 19 (1), 161–191.
Gaustad, T., Samuelsen, B.M., Warlop, L., Fitzsimons, G.J., 2018. The perils of self-brand connections: consumer response to changes in brand meaning. Psychol. Mark. 35 (11), 818–829.
Gierveld, J. De Jong, van Tilburg, T., Dykstra, P., 2016. New Ways of Theorizing and Conducting Research in the Field of Loneliness and Social Isolation, 2nd ed. Cambridge University Press, Cambridge, UK.
Gill-Simmen, L., MacInnis, D.J., Eisingerich, A.B., Whan Park, C., 2018. Brand-self connections and brand prominence as drivers of employee brand attachment. Acad. Mark. Sci. Rev. 8 (3), 128–146.
Go, E., Sundar, S.S., 2019. Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions. Comput. Hum. Behav. 97 (C), 304–316.
Gockley, R., Matarić, M.J., 2006. Encouraging physical therapy compliance with a hands-off mobile robot. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 150–155.
Graaf, M. De, Allouch, S.B., Dijk, J. Van, 2017. Why do they refuse to use my robot? Reasons for non-use derived from a long-term home study. In: 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, pp. 224–233.
Graeff, T., 1996. Image congruence effects on product evaluations: the role of self-monitoring and public/private consumption. Psychol. Mark. 13 (5), 481–499.
Grand View Research, 2022. $1,811.8 [Online]. Available at: https://www.grandviewresearch.com/press-release/global-artificial-intelligence-ai-market. (Accessed 30 May 2022).
Grewal, D., Hulland, J., Kopalle, P.K., Karahanna, E., 2020. The future of technology and marketing: a multidisciplinary perspective. J. Acad. Mark. Sci. 48 (1), 1–8.
Grohmann, B., 2009. Gender dimensions of brand personality. J. Mark. Res. 46 (1), 105–119.
Guthrie, S., 1995. Faces in the Clouds: A New Theory of Religion. Oxford University Press.
Haefner, N., Wincent, J., Parida, V., Gassmann, O., 2021. Artificial intelligence and innovation management: a review, framework, and research agenda. Technol. Forecast. Soc. Chang. 162 (C), 120392.
Haslam, N., 2006. Dehumanization: an integrative review. Personal. Soc. Psychol. Rev. 10 (3), 252–264.
Hoeornamjoon, 2021. He's just like me [Online]. Available at: https://www.reddit.com/r/replika/comments/n2ehr7/hes_just_like_me/. (Accessed 30 September 2021).
Hollebeek, L., Sprott, D.E., Brady, M.K., 2021. Rise of the machines? Customer engagement in automated service interactions. J. Serv. Res. 24 (1), 3–8.
Hollenbeck, C., Kaikati, A.M., 2012. Consumers' use of brands to reflect their actual and ideal selves on Facebook. Int. J. Res. Mark. 29 (4), 395–405.
Houghton, D., Pressey, A., Istanbulluoglu, D., 2020. Who needs social networking? An empirical enquiry into the capability of Facebook to meet human needs and satisfaction with life. Comput. Hum. Behav. 104, 106153.
Huang, M., Rust, R.T., 2018. Artificial intelligence in service. J. Serv. Res. 21 (2), 155–172.
Huang, M., Rust, R.T., 2021. Engaged to a robot? The role of AI in service. J. Serv. Res. 24 (1), 30–41.
Huet, E., 2016. Pushing the boundaries of AI to talk to the dead. Bloomberg, October 20. Available at: https://www.bloomberg.com/news/articles/2016-10-20/pushing-the-boundaries-of-ai-to-talk-to-the-dead. (Accessed 30 September 2021).
Ivaldi, S., Lefort, S., Peters, J., Chetouani, M., Provasi, J., Zibetti, E., 2017. Towards engagement models that consider individual factors in HRI: on the relation of extroversion and negative attitude towards robots to gaze and speech during a human–robot assembly task. Int. J. Soc. Robot. 9 (1), 63–86.
Ivanov, H., Webster, C., 2017. Adoption of robots, artificial intelligence and service automation by travel, tourism and hospitality companies - a cost-benefit analysis, 1501–1517.
Javornik, A., Marder, B., Pizzetti, M., Warlop, L., 2021. Augmented self - the effects of virtual face augmentation on consumers' self-concept. J. Bus. Res. 130 (1), 170–187.
Jecker, N., 2021. You've got a friend in me: sociable robots for older adults in an age of global pandemics. Ethics Inf. Technol. 23 (1), 35–43.
Jeong, K., Sung, J., Lee, H.-S., Kim, A., Kim, H., Park, C., Jeong, Y., Lee, J., Kim, J., 2018. Fribo: a social networking robot for increasing social connectedness through sharing daily home activities from living noise data. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 114–122.
Kamolsook, A., Badir, Y.F., Frank, B., 2019. Consumers' switching to disruptive technology products: the roles of comparative economic value and technology type. Technol. Forecast. Soc. Chang. 140 (C), 328–340.
Kaplan, A.A.D., Sanders, T., Hancock, P.A., 2019. The relationship between extroversion and the tendency to anthropomorphize robots: a Bayesian analysis. Front. Robot. AI 5 (1), 135.
Kaptein, M., Nass, C.I., Markopoulos, P., 2014. The effects of familiarity and similarity on compliance in social networks. Int. J. Internet Mark. Advert. 8 (3), 222–235.
Karanika, K., Hogg, M.K., 2020. Self–object relationships in consumers' spontaneous metaphors of anthropomorphism, zoomorphism, and dehumanization. J. Bus. Res. 109 (C), 15–25.
de Chernatony, L., McDonald, M., Wallace, E., 2003. Creating Powerful Brands: In Consumer, Service and Industrial Markets. Butterworth-Heinemann, Oxford.
de Kerviler, G., Rodriguez, C.M., 2019. Luxury brand experiences and relationship quality for millennials: the role of self-expansion. J. Bus. Res. 102 (C), 250–262.
Kim, S., McGill, A.L., 2011. Gaming with Mr. Slot or gaming the slot machine? Power, anthropomorphism, and risk perception. J. Consum. Res. 38 (1), 94–107.
Kim, C., Mirusmonov, M., Lee, I., 2010. An empirical examination of factors influencing the intention to use mobile payment. Comput. Hum. Behav. 26 (3), 310–322.
Kirk, C., 2018. How customers come to think of a product as an extension of themselves. Harv. Bus. Rev. Available at: https://hbr.org/2018/09/how-customers-come-to-think-of-a-product-as-an-extension-of-themselves. (Accessed 27 September 2021).
Knote, R., Janson, A., Söllner, M., Leimeister, J.M., 2020. Value co-creation in smart services: a functional affordances perspective on smart personal assistants. J. Assoc. Inf. Syst. 22 (2), 418–458.
Koivisto, K., Makkonen, M., Frank, L., Riekkinen, J., 2016. Extending the technology acceptance model with personal innovativeness and technology readiness: a comparison of three models. In: BLED 2016: Proceedings of the 29th Bled eConference "Digital Economy", pp. 113–128.
Kuchenbrandt, D., Eyssel, F., Bobinger, S., Neufeld, M., 2013. When a robot's group membership matters. Int. J. Soc. Robot. 5 (3), 409–417.
Kühnen, U., Hannover, B., Schubert, B., 2001. The semantic–procedural interface model of the self: the role of self-knowledge for context-dependent versus context-independent modes of thinking. J. Pers. Soc. Psychol. 80 (3), 397–409.
Kwak, H., Puzakova, M., Rocereto, J.F., 2017. When brand anthropomorphism alters perceptions of justice: the moderating role of self-construal. Int. J. Res. Mark. 34 (4), 851–871.
Landwehr, J., McGill, A.L., Herrmann, A., 2011. It's got the look: the effect of friendly and aggressive "facial" expressions on product liking and sales. J. Mark. 75 (3), 132–146.
Leary, M., 2007. Motivational and emotional aspects of the self. Annu. Rev. Psychol. 58 (1), 317–344.
Lee, M.K., Peng, W., Jin, S.A., Yan, C., 2006. Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human–robot interaction. J. Commun. 56 (4), 754–772.
Leung, E., Paolacci, G., Puntoni, S., 2018. Man versus machine: resisting automation in identity-based consumer behavior. J. Mark. Res. 55 (6), 818–831.
Li, X., Sung, Y., 2021. Anthropomorphism brings us closer: the mediating role of psychological distance in user–AI assistant interactions. Comput. Hum. Behav. 118, 106680.
Liu, J., Chang, H., Forrest, J.Y.-L., Yang, B., 2020. Influence of artificial intelligence on technological innovation: evidence from the panel data of China's manufacturing sectors. Technol. Forecast. Soc. Chang. 158 (C), 120142.
Longoni, C., Bonezzi, A., Morewedge, C.K., 2019. Resistance to medical artificial intelligence. J. Consum. Res. 46 (4), 629–650.
Loveridge, D., Street, P., 2005. Inclusive foresight. Foresight 7 (3), 31–47.
Lu, L., Cai, R., Gursoy, D., 2019. Developing and validating a service robot integration willingness scale. Int. J. Hosp. Manag. 80, 36–51.
Lv, X., Liu, Y., Luo, J., Liu, Y., Li, C., 2021. Does a cute artificial intelligence assistant soften the blow? The impact of cuteness on customer tolerance of assistant service failure. Ann. Tour. Res. 87 (2), 103114.
MacInnis, D., Folkes, V.S., 2017. Humanizing brands: when brands seem to be like me, part of me, and in a relationship with me. J. Consum. Psychol. 27 (3), 355–374.
Mandel, N., Rucker, D.D., Levav, J., Galinsky, A.D., 2017. The compensatory consumer behavior model: how self-discrepancies drive consumer behavior. J. Consum. Psychol. 27 (1), 133–146.
Mao, C., Koide, R., Brem, A., Akenji, L., 2020. Technology foresight for social good: social implications of technological innovation by 2050 from a global expert survey. Technol. Forecast. Soc. Chang. 153 (C), 119914.
Marder, B., Archer-Brown, C., Colliander, J., Lambert, A., 2019. Vacation posts on Facebook: a model for incidental vicarious travel consumption. J. Travel Res. 58 (6), 1014–1033.
Marinova, D., de Ruyter, K., Huang, M.-H., Meuter, M.L., Challagalla, G., 2017. Getting smart: learning from technology-empowered frontline interactions. J. Serv. Res. 20 (1), 29–42.
Marketing Science Institute, 2018. Research priorities 2018–2020. Available at: http://pazarlama.ub.akdeniz.edu.tr/wp-content/uploads/2019/10/MSI_RP18-20.pdf. (Accessed 19 October 2021).
Markus, H.R., Kitayama, S., 1991. Culture and the self: implications for cognition, emotion, and motivation. Psychol. Rev. 98 (2), 224–253.
Martensen, A., Brockenhuus-Schack, S., Zahid, A.L., 2018. How citizen influencers persuade their followers. J. Fash. Mark. Manag. 22 (3), 335–353.
Mathur, M., Reichling, D.B., 2016. Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley. Cognition 146, 22–32.
McCrae, R., Costa, P.T., 2003. Personality in Adulthood: A Five-Factor Theory Perspective. Guilford Press.
McCrae, R., Zonderman, A.B., Costa Jr., P.T., Bond, M.H., Paunonen, S.V., 1996. Evaluating replicability of factors in the revised NEO personality inventory: confirmatory factor analysis versus procrustes rotation. J. Pers. Soc. Psychol. 70 (3), 552–566.
McKinsey Global Institute, 2017. Artificial intelligence: the next digital frontier? [Online]. Available at: https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx. (Accessed 20 January 2021).
McLean, G., Osei-Frimpong, K., 2019a. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 99 (1), 28–37.
McLean, G., Osei-Frimpong, K., 2019b. Chat now… Examining the variables influencing the use of online live chat. Technol. Forecast. Soc. Chang. 146 (C), 55–67.
McLeay, F., Osburg, V.S., Yoganathan, V., Patterson, A., 2021. Replaced by a robot: service implications in the age of the machine. J. Serv. Res. 24 (1), 104–121.
Mehrabian, A., Russell, J.A., 1974. An Approach to Environmental Psychology. MIT Press.
Melián-González, S., Gutiérrez-Taño, D., Bulchand-Gidumal, J., 2021. Predicting the intentions to use chatbots for travel and tourism. Curr. Issue Tour. 24 (2), 192–210.
Melumad, S., Meyer, R., 2020. Full disclosure: how smartphones enhance consumer self-disclosure. J. Mark. 84 (3), 28–45.
Mende, M., Scott, M.L., van Doorn, J., Grewal, D., Shanks, I., 2019. Service robots rising: how humanoid robots influence service experiences and elicit compensatory consumer responses. J. Mark. Res. 56 (4), 535–556.
Miller, L., Kraus, J., Babel, F., Baumann, M., 2021. More than a feeling—interrelation of trust layers in human-robot interaction and the role of user dispositions and state anxiety. Front. Psychol. 12 (1), 378–396.
Moradi, M., Moradi, M., Bayat, F., 2018. On robot acceptance and adoption: a case study. In: 2018 8th Conference of AI & Robotics and 10th RoboCup Iranopen International Symposium. IEEE, pp. 21–25.
Morewedge, C., Monga, A., Palmatier, R.W., Shu, S.B., Small, D.A., 2021. Evolution of consumption: a psychological ownership framework. J. Mark. 85 (1), 196–218.
Mori, M., 1970. Bukimi no tani [The uncanny valley]. Energy 7 (4), 33–35.
Mori, M., 2012. The uncanny valley: the original essay. IEEE Spectr. 19 (2), 1–6.
Mourey, J., Olson, J.G., Yoon, C., 2017. Products as pals: engaging with anthropomorphic products mitigates the effects of social exclusion. J. Consum. Res. 44 (2), 414–431.
Moyle, W., Cooke, M., Beattie, E., Jones, C., Klein, B., Cook, G., Gray, C., 2013. Exploring the effect of companion robots on emotional expression in older adults with dementia: a pilot randomized controlled trial. J. Gerontol. Nurs. 39 (5), 46–53.
Nass, C., Moon, Y., 2000. Machines and mindlessness: social responses to computers. J. Soc. Issues 56 (1), 81–103.
Norton, M., Frost, J.H., Ariely, D., 2007. Less is more: the lure of ambiguity, or why familiarity breeds contempt. J. Pers. Soc. Psychol. 92 (1), 97–105.
Odekerken-Schröder, G., Mele, C., Russo-Spena, T., Mahr, D., Ruggiero, A., 2020. Mitigating loneliness with companion robots in the COVID-19 pandemic and beyond: an integrative framework and research agenda. J. Serv. Manag. 31 (6), 1149–1162.
Ostrowski, A.A.K., DiPaola, D., Partridge, E., Park, H.W., Breazeal, C., 2019. Older adults living with social robots: promoting social connectedness in long-term communities. IEEE Robot. Autom. Mag. 26 (2), 59–70.
Park, C., MacInnis, D.J., Priester, J., Eisingerich, A.B., Iacobucci, D., 2010. Brand attachment and brand attitude strength: conceptual and empirical differentiation of two critical brand equity drivers. J. Mark. 74 (6), 1–17.
Pickett, C., Gardner, W.L., Knowles, M., 2004. Getting a cue: the need to belong and enhanced sensitivity to social cues. Personal. Soc. Psychol. Bull. 30 (9), 1095–1107.
Pradhan, A., Findlater, L., Lazar, A., 2019. "Phantom friend" or "just a box with information": personification and ontological categorization of smart speaker-based voice assistants by older adults. Proceedings of the ACM on Human-Computer Interaction 3 (CSCW), 1–21.
Prescott, T., Robillard, J.M., 2021. Are friends electric? The benefits and risks of human-robot relationships. iScience 24 (1), 101993.
PRNewsWire, 2021. Global study: people trust robots more than themselves with money [Online]. Available at: https://www.prnewswire.com/news-releases/global-study-people-trust-robots-more-than-themselves-with-money-301224495.html. (Accessed 24 February 2021).
Puntoni, S., Reczek, R.W., Giesler, M., Botti, S., 2021. Consumers and artificial intelligence: an experiential perspective. J. Mark. 85 (1), 131–151.
PwC, 2017. Sizing the prize: what's the real value of AI for your business and how can you capitalise? [Online]. Available at: https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf. (Accessed 8 February 2020).
Quester, P., Karunaratna, A., Goh, L.K., 2000. Self-congruity and product evaluation: a cross-cultural study. J. Consum. Mark. 17 (6), 525–537.
Ramadan, Z., Farah, M.F., El Essrawi, L., 2021. From Amazon.com to Amazon.love: how Alexa is redefining companionship and interdependence for people with special needs. Psychol. Mark. 38 (4), 596–609.
Rashy, 2020. Literally, she shortly after this treat me like a mirror to her thoughts [Online]. Available at: https://www.reddit.com/r/replika/comments/gxbuwo/literally_she_shortly_after_this_treat_me_like_a/. (Accessed 30 September 2021).
Rauschnabel, P., Ahuvia, A.C., 2014. You're so lovable: anthropomorphism and brand love. J. Brand Manag. 21 (5), 372–395.
Robert, L., Alahmad, R., Esterwood, C., Kim, S., You, S., Zhang, Q., 2020. A review of personality in human-robot interactions. Found. Trends Inf. Syst. 4 (2), 107–212.
Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K., 2015. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In: 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, pp. 1–8.
Salles, A., Evers, K., Farisco, M., 2020. Anthropomorphism in AI. AJOB Neurosci. 11 (2), 88–95.
Schmitt, B., 2020. Speciesism: an obstacle to AI and robot adoption. Mark. Lett. 31 (1), 3–6.
Schuller, B., 2018. Speech emotion recognition: two decades in a nutshell, benchmarks, and ongoing trends. Commun. ACM 61 (5), 90–99.
Shankar, V., 2018. How artificial intelligence (AI) is reshaping retailing. J. Retail. 94 (4), 6–11.
Sheehan, B., Jin, H.S., Gottlieb, U., 2020. Customer service chatbots: anthropomorphism and adoption. J. Bus. Res. 115 (C), 14–24.
Sheeraz, M., Qadeer, F., Masood, M., Hameed, I., 2018. Self-congruence facets and emotional brand attachment: the role of product involvement and product type. Pak. J. Commer. Soc. Sci. 12 (2), 598–616.
Sheldon, K., Elliot, A.J., Kim, Y., Kasser, T., 2001. What is satisfying about satisfying events? Testing 10 candidate psychological needs. J. Pers. Soc. Psychol. 80 (2), 325.
Shiaislam, 2020. He's actually becoming like me! [Online]. Available at: https://www.reddit.com/r/replika/comments/huuqba/hes_actually_becoming_like_me/. (Accessed 30 September 2021).
Sidner, C., Bickmore, T., Nooraie, B., Rich, C., Ring, L., Shayganfar, M., Vardoulakis, L., 2018. Creating new technologies for companionable agents to support isolated older adults. ACM Trans. Interact. Intell. Syst. 8 (3), 1–27.
Singelis, T., 1994. The measurement of independent and interdependent self-construals. Personal. Soc. Psychol. Bull. 20 (5), 580–591.
Sirgy, M., 1982. Self-concept in consumer behavior: a critical review. J. Consum. Res. 9 (3), 287–300.
Sirgy, M., Grewal, D., Mangleburg, T., 2000. Retail environment, self-congruity, and retail patronage: an integrative model and a research agenda. J. Bus. Res. 49 (2), 127–138.
Slotter, E., Gardner, W.L., 2014. Remind me who I am: social interaction strategies for maintaining the threatened self-concept. Personal. Soc. Psychol. Bull. 40 (9), 1148–1161.
Stock, R., Merkle, M., 2018. Can humanoid service robots perform better than service employees? A comparison of innovative behavior cues. In: Proceedings of the 51st Hawaii International Conference on System Sciences, pp. 1056–1065.
Straßmann, C., Krämer, N.C., 2018. A two-study approach to explore the effect of user characteristics on users' perception and evaluation of a virtual assistant's appearance. Multimodal Technol. Interact. 2 (4), 66–91.
Stroessner, S., Benitez, J., 2019. The social perception of humanoid and non-humanoid robots: effects of gendered and machinelike features. Int. J. Soc. Robot. 11 (2), 305–315.
Swaminathan, V., Stilley, K.M., Ahluwalia, R., 2009. When brand personality matters: the moderating role of attachment styles. J. Consum. Res. 35 (6), 985–1002.
Syam, N., Sharma, A., 2018. Waiting for a sales renaissance in the fourth industrial revolution: machine learning and artificial intelligence in sales research and practice. Ind. Mark. Manag. 69 (1), 135–146.
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., Loggarakis, A., 2020. User experiences of social support from companion chatbots in everyday contexts: thematic analysis. J. Med. Internet Res. 22 (3), e16235.
Thomaz, F., Salge, C., Karahanna, E., Hulland, J., 2020. Learning from the dark web: leveraging conversational agents in the era of hyper-privacy to enhance marketing. J. Acad. Mark. Sci. 48 (1), 43–63.
Thorne, A., 1987. The press of personality: a study of conversations between introverts and extraverts. J. Pers. Soc. Psychol. 53 (4), 718–726.
Toosi, N., Babbitt, L.G., Ambady, N., Sommers, S.R., 2012. Dyadic interracial interactions: a meta-analysis. Psychol. Bull. 138 (2), 1–27.
Triandis, H.C., 1989. The self and social behavior in differing cultural contexts. Psychol. Rev. 96 (3), 506–520.
Troye, S., Supphellen, M., 2012. Consumer participation in coproduction: "I made it myself" effects on consumers' sensory perceptions and evaluations of outcome and input product. J. Mark. 76 (2), 33–46.
Tuan, Y., 2002. Community, society, and the individual. Geogr. Rev. 92, 307–318.
Turkle, S., 2007. Authenticity in the age of digital companions. Interact. Stud. 8 (3), 501–517.
Unal, S., Dalgic, T., Akar, E., 2018. How avatars help enhancing self-image congruence. Int. J. Internet Mark. Advert. 12 (4), 374–395.
Usakli, A., Baloglu, S., 2011. Brand personality of tourist destinations: an application of self-congruity theory. Tour. Manag. 32 (1), 114–127.
Valcke, B., Van Hiel, A., Van Assche, J., Van Roey, T., Onraet, E., Roets, A., 2020. The need for inclusion: the relationships between relational and collective inclusion needs and psychological well- and ill-being. Eur. J. Soc. Psychol. 50 (3), 579–596.
Van den Hende, E., Mugge, R., 2014. Investigating gender-schema congruity effects on consumers' evaluation of anthropomorphized products. Psychol. Mark. 31 (4), 264–277.
Van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D., Petersen, J.A., 2017. Domo arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences. J. Serv. Res. 20 (1), 43–58.
Venkatesh, V., Davis, F.D., 2000. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46 (2), 186–204.
Vogl, T., Seidelin, C., Ganesh, B., Bright, J., 2020. Smart technology and the emergence of algorithmic bureaucracy: artificial intelligence in UK local authorities. Public Adm. Rev. 80 (6), 946–961.
Walz, A., Firth-Butterfield, K., 2018. Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime. Duke L. Tech. Rev. 18 (1), 176–231.
Wamba, S., Bawack, R.E., Guthrie, C., Queiroz, M.M., Carillo, K.D.A., 2021. Are we preparing for a good AI society? A bibliometric review and research agenda. Technol. Forecast. Soc. Chang. 164 (C), 120482.
Wang, X., Krumhuber, E.G., 2018. Mind perception of robots varies with their economic versus social function. Front. Psychol. 9 (1), 1230.
Weiner, B., 1985. An attributional theory of achievement motivation and emotion. Psychol. Rev. 92 (4), 548–573.
West, S., Whittaker, M., Crawford, K., 2019. Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute. Available at: https://ainowinstitute.org/discriminatingsystems.html.
Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S., Martins, A., 2018. Brave new world: service robots in the frontline. J. Serv. Manag. 29 (5), 907–931.
Woods, H.S., 2018. Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism. Crit. Stud. Media Commun. 35 (4), 334–349.
Wyatt, J., 2020. Artificial intelligence and simulated relationships. Available at: https://johnwyatt.com/2020/01/10/article-artificial-intelligence-and-simulated-relationships/. (Accessed 29 September 2021).
Xiao, L., Kumar, V., 2021. Robotics for customer service: a useful complement or an ultimate substitute? J. Serv. Res. 24 (1), 9–29.
Xu, Y., Shieh, C.-H., van Esch, P., Ling, I.-L., 2020. AI customer service: task complexity, problem-solving ability, and usage intention. Australas. Mark. J. 28 (4), 189–199.
Yang, L., Aggarwal, P., McGill, A.L., 2020. The 3 C's of anthropomorphism: connection, comprehension, and competition. Consum. Psychol. Rev. 3 (1), 3–19.
Yim, M., Baek, T.H., Sauer, P.L., 2018. I see myself in service and product consumptions: measuring self-transformative consumption vision (SCV) evoked by static and rich media. J. Interact. Mark. 44 (1), 122–139.
Yoganathan, V., Osburg, V.-S., Kunz, W.H., Toporowski, W., 2021. Check-in at the Robo-desk: effects of automated social presence on social cognition and service implications. Tour. Manag. 85, 104309.
Yogeeswaran, K., Złotowski, J., Livingstone, M., Bartneck, C., Sumioka, H., Ishiguro, H., 2016. The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J. Hum. Robot Interact. 5 (2), 29–47.
Zhang, Y., Song, W., Tan, Z., Zhu, H., Wang, Y., Lam, C.M., Weng, Y., Hoi, S.P., Lu, H., Chan, B.S.M., 2019. Could social robots facilitate children with autism spectrum disorders in learning distrust and deception? Comput. Hum. Behav. 98, 140–149.
Zhou, M., Mark, G., Li, J., Yang, H., 2019. Trusting virtual agents: the effect of personality. ACM Trans. Interact. Intell. Syst. 9 (2), 1–36.
Złotowski, J., Proudfoot, D., Yogeeswaran, K., Bartneck, C., 2015. Anthropomorphism: opportunities and challenges in human–robot interaction. Int. J. Soc. Robot. 7 (3), 347–360.
Złotowski, J., Sumioka, H., Nishio, S., Glas, D.F., Bartneck, C., Ishiguro, H., 2016. Appearance of a robot affects the impact of its behaviour on perceived trustworthiness and empathy. Paladyn J. Behav. Robot. 7 (1), 55–66.

Amani Alabed is a PhD student at Newcastle University Business School, Newcastle University, UK. Her research focuses on consumer behaviour and artificial intelligence. She has previously obtained an MSc degree in International Marketing at Newcastle University Business School. She also has a certification in the fundamentals of AI from Microsoft and a certification in social media from the Digital Marketing Institute.

Ana Javornik is an assistant professor in marketing at School of Management, University of Bristol. Her research focuses on the use and deployment of digital and immersive technologies, predominantly in commercial contexts. Her work is regularly presented at international conferences and has been published in internationally recognised journals such as Journal of Retailing, Journal of Interactive Marketing, Psychology & Marketing and others.

Diana Gregory-Smith is a Professor of Marketing and Sustainability at Newcastle University Business School, Newcastle University, UK. Her research focuses on ethical and sustainable marketing and consumption; the psychology of decision making and behaviour change; technology and consumer behaviour. Diana is an interdisciplinary researcher whose work has been published in a range of journals such as Psychology and Marketing, Journal of Business Ethics, Computers in Human Behavior, Annals of Tourism Research, Tourism Management, and European Management Review, amongst others.