'Online Intermediary Liability in Malaysia: Exploring Artificial Intelligence As Game-Changer' (2022) 3 MLJ cvi
INTRODUCTION
for hosting illegal third-party content online.1 What is certain is that, as a
general rule, an intermediary enjoys a certain level of immunity against
third-party publications. With user-generated elements added to increase web
interactivity, however, the legal position on how far content creators who play
a passive role akin to that of intermediaries remain liable is far from clear.
1 Suzi Fadhilah Ismail, Ida Madieha Abdul Ghani Azmi, and Mahyuddin Daud,
‘Transplanting the United States’ Style of Safe Harbour Provisions on Internet Service
Providers Via Multilateral Agreements: Can One Size Fit All?’ (2018) 26(2) IIUM Law
Journal 26 https://journals.iium.edu.my/iiumlj/index.php/iiumlj/article/view/396.
2 The Organisation for Economic Co-operation and Development, ‘The Economic and
Social Role of Internet Intermediaries’ (April 2010) https://www.oecd.org/sti/ieconomy/
44949023.pdf.
3 ARTICLE19, ‘Internet Intermediaries: Dilemma of Liability Q and A’ (Article 19, 29
August 2013) https://www.article19.org.
However, in recent years there has been a trend for content creators to set
aside a section of their page or article that invites visitors to comment.
Online news portals, for example, publish news articles and provide a
comments section at the bottom of each article. Social media platforms have
adopted this trend, providing massive virtual spaces for users to share their
thoughts through comments and to add content of their own. For example,
users may upload pictures, videos and music in addition to plain text. Social
media platforms do not usually pre-review what users contribute to their
pages, owing to the massive volume of user-generated content they receive
every second, and instead act upon complaints.4 Nevertheless, they control the
algorithms that determine how content appears and can remove content as
necessary. It is on this premise that they claim to be 'intermediaries' that
qualify for immunity against liability for any third-party publication that they
host.
Since the definitions of intermediaries set by the OECD are mere references
for states, courts have had little assistance in determining when a publisher
becomes an intermediary. This has resulted in irregular interpretations that
have sparked heated debates over freedom of speech and internet censorship,
though this paper does not elaborate on them at this point. This article will
analyse several cases to illustrate this vague position. The next part reviews the
case of Peguam Negara Malaysia v Mkini Dotcom Sdn Bhd & Anor5, focusing
on the issue of intermediary liability as discussed by the Federal Court of
Malaysia.
The facts of the case were briefly as follows. Mkini Dotcom Sdn Bhd ('the first
respondent') and its editor-in-chief ('the second respondent') operated an
online news portal known as 'Malaysiakini'. The portal enjoys a large global
readership and at times publishes news of a sensational nature. A reader is not
required to sign up or subscribe to Malaysiakini to read a news page. However,
netizens who wish to leave comments on any news page must hold an active
paid subscription to the portal. This arguably allows the portal to determine
the real identity of its commenters.
Several issues were central to this case. The issues arising for consideration
were:
(a) have the respondents rebutted the presumption of publication under
s 114A of the Evidence Act?6
(b) does ‘publication’ require the element of intention and/or knowledge to
be fulfilled? and
(c) did the first and/or second respondents possess the requisite ‘intention
to publish’ for the purposes of scandalising the court contempt?
The first issue concerns the publication of the impugned comments on
MKini’s portal where reference to s 114A of the Evidence Act 1950 was made.
The applicant submitted that the respondent MKini has facilitated publication
of the impugned comments on its portal and thereby a prima facie
presumption of publication should arise under s 114A of the Evidence Act
1950. Accordingly, s 114A imposes a rebuttable presumption of publication
onto anyone whose name appears on the said publication ‘depicting himself as
the owner, host, administrator, editor or sub-editor, or who in any manner’.
The applicant further submitted that an intention to publish the impugned
comments on the part of the respondents did not need to be established.
6 (Act 56).
The respondents raised three main points in their defence. Firstly, the
respondents submitted that they should not be responsible for facilitating the
publication of the impugned comments for lack of knowledge, as the
comments were not created by them. Secondly, Mkini argued that the
provisions of the Content Code do not oblige Code subjects to monitor the
online activities of netizens unless prompted by complaints. Thirdly, the
respondents submitted that they had taken additional measures as follows:
(1) in-house terms and conditions warning subscribers against making illegal
and harmful comments; (2) installation of a web filter to automatically filter
out bad language in comments; and (3) an online peer reporting system
whereby, upon receipt of a complaint from any user, the system alerts the
editor to begin the content moderation process.
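The three measures described by the respondents can be pictured as a simple moderation pipeline: a pre-publication word filter, and a peer-reporting mechanism that escalates a comment to a human editor. The sketch below is purely illustrative and is not based on Mkini's actual system; the names, the banned-word list and the report threshold are invented.

```python
# Illustrative sketch of a complaint-triggered moderation pipeline:
# an automatic word filter (measure 2) plus peer reporting that
# escalates to an editor (measure 3). All details are hypothetical.
from dataclasses import dataclass


BANNED_WORDS = {"vulgarword"}  # placeholder; a real filter list is far larger


@dataclass
class Comment:
    author: str
    text: str
    reports: int = 0
    flagged_for_editor: bool = False


def passes_word_filter(comment: Comment) -> bool:
    """Automatic pre-publication check for bad language."""
    words = {w.strip(".,!?").lower() for w in comment.text.split()}
    return not (words & BANNED_WORDS)


def report_comment(comment: Comment, report_threshold: int = 1) -> None:
    """Peer reporting: a complaint triggers editorial review."""
    comment.reports += 1
    if comment.reports >= report_threshold:
        comment.flagged_for_editor = True


clean = Comment("user1", "I disagree with this ruling.")
assert passes_word_filter(clean)
report_comment(clean)
assert clean.flagged_for_editor  # escalated to a human editor, not auto-removed
assert not passes_word_filter(Comment("user2", "vulgarword here"))
```

Note that in such a design nothing is removed automatically on report; a complaint merely queues the comment for human review, which mirrors the reactive, complaint-driven posture the respondents described.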
Section 98 of the CMA7 is not a new provision, yet since its inception no case
has tested it in court. In this part, the author asks whether it is mandatory for
a content creator to be a registered Code subject. If the answer is in the
negative, the next question is whether compliance with the Content Code is
mandatory to an extent that it can provide any sort of protection to content
creators.
Although s 98 has its roots in the statutory provisions of the CMA, its wording
is important to note. It provides:
(1) Subject to section 99, compliance with a registered voluntary industry code
shall not be mandatory.
(2) Compliance with a registered voluntary industry code shall be a defence
against any prosecution, action or proceeding of any nature, whether in a
court or otherwise, taken against a person (who is subject to the voluntary
industry code) regarding a matter dealt with in that code.
7 (Act 588).
8 Malaysian Communications and Multimedia, ‘Content Code 2022’ (May 2022) https://
contentforum.my/wp-content/uploads/2022/06/Content-Code-2022.pdf.
9 ‘Content Forum Members’ (Content Forum, 2021) https://contentforum.my.
(b) each person who has submitted their agreement to the Forum that they
will be bound by this Code; and
(c) each person whom the Commission has directed in accordance with
s 99 of the CMA.
A glance through the above groups indicates that the majority of those who
operate in the broadcasting and networking sectors will be required to register
as Code subjects. However, an ordinary content creator or YouTuber has the
option of whether or not to become a member. This is in line with the spirit of
self-regulation as promulgated in s 123 of the CMA, namely that participation
in the regulatory scheme is usually voluntary.10
Given that registration is mandatory only for selected groups of service
providers, usually technical in nature, the next question is whether complying
with the principles of the Code accords them any sort of legal protection.
10 Ian Bartle and Peter Vass, Self-Regulation and the Regulatory State: A Survey of Policy and
Practice (Centre for the Study of Regulated Industries, University of Bath School of
Management 2005).
From the approach taken by the Federal Court in assessing whether s 98 could
afford a defence to Mkini, one may ponder how far a content creator must go
to ensure that the countermeasures taken are in line with the objectives of the
Content Code so as to qualify for s 98 protection. The Federal Court also
distinguished Twitter and Facebook from Mkini, as the former are clearly
mere conduits.11 Mkini was found to have control over who could post
comments on its platform and was hence expected to take additional
self-regulatory countermeasures. In summary, s 98 cannot be read in
isolation: those intending to seek its protection must prove to the court that all
necessary countermeasures to remove prohibited content have been
exhausted. At the same time, content creators must prove that they have
absolutely no control over third-party content fed onto their websites. Any
efforts taken, be it content takedown, moderation or even the use of artificial
intelligence, must be in harmony with the objectives of the Content Code.
The case of Bunt v Tilley & Others12 was also carefully analysed by the Federal
Court. In brief, the claimant, Bunt, sued three defendants for libel and
harassment over allegedly defamatory statements made in Internet chatrooms.
Their respective Internet Service Providers, AOL, BT and Tiscali, were also
named as the fourth, fifth and sixth defendants. The claimant alleged that
because the ISPs gave their respective customers a connection to the Internet,
they should be accountable for the posts complained of. The ISP defendants
moved to have the claim struck out and/or for summary judgment.
why liability should not accrue. So too, if the true position were that the applicants
had been (in the Claimant’s words) responsible for ‘corporate sponsorship and
approval of their illegal activities’.13
Further, in Justice Eady's judgment: 'it is not always necessary to be aware of the
defamatory content, still less of its legal significance ... for a person to be held
responsible there must be knowing involvement in the process of publication of the
relevant words. It is not enough that a person merely plays a passive instrumental
role in the process'.14 The High Court struck out all claims against the ISPs as
they played no role in the publication of the impugned communications. The
court held that an ISP that performed no more than a passive role in
facilitating postings on the internet and did not host the relevant website was
no more a publisher at common law than a telephone company would be
liable for defamation conveyed over the telephone.
It is on this basis that Mkini differs from Bunt: liability accrued to Mkini
because it knowingly permitted subscribers to communicate contemptuous
material and had the opportunity to prevent such publication, but failed to
take adequate action.
The 2015 case of Delfi AS v Estonia15 may be equated with the situation in
Mkini. Delfi was also an online news portal with a very wide readership,
similar to Mkini. It published an article that was ruled by the Estonian courts
to be defamatory. Upon appeal, the European Court of Human Rights
(ECtHR) upheld the decisions of the Estonian courts. What is central and
relevant in this context is that, although it was argued that Delfi should be
given immunity, purportedly in line with the global legal position on
intermediary liability, the ECtHR opined that the Estonian courts' decision to
hold Delfi liable for defamation met the requirements of necessity and
expediency under the European Convention on Human Rights (ECHR). The
ECtHR also agreed that Delfi was to be treated as a publisher, reaffirming the
findings of the Estonian courts that it was liable for the defamatory
comments. What transpires is that Delfi was not treated by the courts as an
intermediary, a position altogether different from that of an ISP.
On the other hand, the decision in Godfrey v Demon Internet Service16 took a
different route from Bunt.17 Godfrey sued Demon Internet, an ISP, over a
defamatory newsgroup posting made available from the defendant's
newsgroup servers. Laurence Godfrey, a British lecturer, claimed that an
anonymous Internet user had published an indecent and defamatory posting
and fraudulently ascribed its authorship to him. The comment was made on
an online public forum managed by Demon Internet Limited, a UK-based
ISP, which did not remove the posting for more than 20 days, until its
expiration date on the public forum. Godfrey subsequently initiated a libel
claim against the ISP, demanding damages for the allegedly defamatory
comment. The High Court ruled that the ISP knew or had cause to know that
the impugned statement was defamatory, as the plaintiff had alerted the firm
that he was not the genuine author of the remark. Yet the defendant opted not
to delete the defamatory message. Accordingly, the Court held that the ISP
could not rely on the defence under s 1 of the UK Defamation Act. An ISP was
held to be a publisher at common law of defamatory comments posted on its
site by an unknown user; the situation is analogous to that of the secretary of
a golf club who allowed a defamatory statement to remain on a notice board
in Byrne v Deane.18
The position in Godfrey has caused ISPs in the UK to begin removing
defamatory materials upon receipt of complaints. This arguably could lead to
a chilling of free speech, unwarranted content removal and privatised
censorship. Comparing Bunt and Godfrey, one can see that the role played by
intermediaries, whether active or passive, determines the extent of their
liability. The position taken in Godfrey has been avoided in Bunt and other
recent decisions, whereby an ISP or telephone company that plays a passive
role in communicating electronic messages should, to a certain extent, enjoy a
degree of immunity. Similarly, the decisions in Bunt and Delfi were accepted
in Mkini, but distinguished because Mkini's conduct was not comparable to
that of the ISPs in Bunt. Hence, it is submitted that in Mkini the Federal
Court expected an intermediary to play a far more passive role in order to
avoid liability for the publication of illegal content.
Given the development of the cases analysed above, one can draw an early
conclusion that the legal framework of intermediary liability is far from
certain. At the same time, the role played by intermediaries is crucial in
determining the extent of liability as far as the publication of illegal content is
concerned. As far as the definitions of 'intermediary' are concerned, one need
only play the role of a passive 'middleman' to fit within the framework.
The examples cited above suggest, inter alia, that payment network systems,
ISPs and hosting providers should be called intermediaries and should enjoy
immunity from liability for the publication of third-party content on their
platforms. Where a payment network provider offers an online banking
service to a client, for example to wire money from one account to another,
one can safely say that it merely acts as a passive intermediary for that service.
Nevertheless, when a payment network provider creates content on its
website, its position shifts to that of a content creator answerable for any
content appearing on that website.
Taking into consideration the developments in Mkini and Delfi, the line
between an intermediary that enjoys immunity and one that assumes the risks
of third-party publication is far from clear. It all depends on the level of
engagement in content creation, editing and control. In the above example of
a payment network provider, if it also owns a social media page with comment
features enabled, its editor must be cautious of the liability it may attract, such
as for defamatory, hateful or otherwise illegal remarks.
Since intermediaries who assume the role of content creators risk being
labelled publishers, we ask whether the use of artificial intelligence may be the
way forward to reduce legal risks. Is it possible to employ machine learning to
minimise risk for content creators, especially in respect of third-party content
pushed onto their websites?
19 Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019).
20 AI Data, ‘What Are Search And Recommendation Systems In ML?’ (Telus International, 1
February 2021) https://www.telusinternational.com/articles/4-ways-machine-learning-
can-enhance-social-media-marketing.
An AI method that assesses the opinion expressed in text data is known as
sentiment analysis. To map social media data onto predetermined sentiment
categories such as positive, negative or neutral, the procedure employs both
natural language processing (NLP) and machine learning. The system can
then be trained to recognise the underlying sentiment in fresh texts.
Sentiment analysis may be used in social media and customer service to obtain
feedback on a new product, service or design. Businesses may use sentiment
analysis to see how people feel about their competitors or hot topics in their
sector. Using sentiment analysis, content creators may be able to assess in
advance which posts may invite illegal content. Hence AI may serve as an
early warning system and implement automated takedowns once
objectionable words from a predefined list are detected in the comments
section.
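The early-warning idea described above can be sketched as a simple triage routine: comments matching a predefined takedown list are removed automatically, while milder negative cues are merely queued for human review. The word lists and categories below are invented for illustration; a production system would rely on trained sentiment models rather than bare keyword matching.

```python
# Minimal sketch of keyword-based comment triage as an "early warning"
# system. The word lists are hypothetical placeholders.
import re

OBJECTIONABLE = {"scandalous", "contemptuous"}  # hypothetical takedown list
NEGATIVE_HINTS = {"corrupt", "disgrace"}        # hypothetical sentiment cues


def triage(comment: str) -> str:
    """Classify a comment as takedown, review or publish."""
    tokens = set(re.findall(r"[a-z']+", comment.lower()))
    if tokens & OBJECTIONABLE:
        return "takedown"   # automated removal
    if tokens & NEGATIVE_HINTS:
        return "review"     # flagged for human moderation
    return "publish"


assert triage("A scandalous attack on the bench") == "takedown"
assert triage("This judge is corrupt") == "review"
assert triage("A well reasoned judgment") == "publish"
```

The two-tier outcome reflects the legal risk discussed above: automated takedown is reserved for the clearest matches, while ambiguous cases go to a human, reducing both over-removal and the platform's exposure for known illegal content.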
Because machine learning relies on examples to discern patterns, it can learn
to categorise new posts in any language, as long as the training examples are
correctly tagged with the intended prediction.
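As a rough illustration of learning from tagged examples, the sketch below trains a tiny bag-of-words classifier on labelled posts and scores new text by smoothed word likelihoods. The training data and labels are invented; real moderation systems train on far larger corpora with more sophisticated models.

```python
# Toy bag-of-words classifier: learn word counts per label from tagged
# examples, then score new text by smoothed log-likelihood.
from collections import Counter, defaultdict
import math


def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts


def classify(counts, text):
    """Return the label whose word distribution best fits the text."""
    best_label, best_score = None, -math.inf
    for label, words in counts.items():
        total = sum(words.values())
        # Add-one smoothing so unseen words do not zero out a label.
        score = sum(math.log((words[w] + 1) / (total + 1))
                    for w in text.lower().split())
        if score > best_score:
            best_label, best_score = label, score
    return best_label


model = train([
    ("great helpful insightful article", "acceptable"),
    ("insulting abusive rant", "objectionable"),
])
assert classify(model, "another abusive rant") == "objectionable"
assert classify(model, "insightful article") == "acceptable"
```

Because the model only counts tokens, the same routine works on posts in any language, which is the point made above: the quality of the tagging, not the language, determines what the system can learn.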
With services such as Instagram, Snapchat and Pinterest, the social web has
become increasingly visual. Posts on these platforms are primarily visual, with
only a few hints in the accompanying text. As a result, recognising what such
posts contained was previously nearly impossible. Fortunately, this is another
instance where deep learning comes to the rescue: these algorithms are now
capable of recognising logos, faces and objects in both still and moving
images.
RECOMMENDATIONS
By the same token, for defamatory or sacrilegious content, the court does not
distinguish between publisher, editor and distributor liability where there is
knowledge of the culpable conduct. An intermediary can potentially be liable
for a third party's content if there is proof that the intermediary has the ability
to control, choose or edit that content. Mkini shows that the same strict
liability stance has been taken with regard to contemptuous content. One
major flaw in the court's approach in Mkini is its failure to distinguish
between filtering 'harsh words' and filtering content contemptuous of the
court. The court did not venture further to determine whether the filtering
software adopted by Mkini was actually capable of filtering content critical of
the court. In doing so, the court has in effect ruled that all publishers and
online platforms must refrain from commenting on current issues concerning
the administration of justice, as that would be deemed 'contempt of court'.
With due respect, we argue that such a stand may ignore the fundamental
right to express opinions on topical issues of public interest. In the final
analysis, one must be mindful of the report of the Transatlantic Working
Group, which conceded that 'AI, however, is not a simple solution or a single
type of technology': there are various forms of AI and automation used in
content moderation, and existing content curation focusing on hate speech,
violent extremism and disinformation varies greatly depending on the
technology used. Among the report's key recommendations is that
automation in content moderation should not be mandated in law because the
24 Mike Kaput, ‘What Is Artificial Intelligence for Social Media?’ (Marketing AI Institute, 17
January 2022) https://www.marketingaiinstitute.com.
state of the art is neither reliable nor effective. Most importantly, the report
concedes that the context of a message is more important than the words
used, and that this is often ignored in algorithm-based content moderation.
Factors not captured by the AI system, such as history, politics and cultural
context, need to be carefully considered. In this regard, intermediary liability
laws should neither mandate, nor condition liability protection on, the use of
filters.25
CONCLUSION
Despite advances in content moderation, and given that such systems are not
infallible, it remains to be seen whether content contemptuous of the courts,
being inherently subjective, can easily be contained through algorithms.
Unlike profanity, obscenity, hate speech and sexually explicit material, which
can readily be 'targeted' by content moderation, content that criticises a court
judgment while litigation is still ongoing is local-centric and cannot easily be
identified, monitored, tracked or managed by the system. In this context, we
must be mindful that, despite the aggressive use of AI content moderation by
big platform providers, the technology is nowhere near perfect.
Intermediaries that choose to deploy some form of content moderation
should not simply be seen as performing the role of publisher or editor. The
widespread presence of disinformation, misinformation, toxic speech,
profanity and obscenity warrants the big players exercising due care by
playing a role in stemming such illegal and harmful content from being
disseminated online.