20 | How to prevail when technology fails
Non-tech companies can
tackle misinformation
Three steps to address the
growing ethical challenges
Just 9% of businesses identify the spread of
misinformation as an important ethical issue to
address when investing in technology, which makes
it the least frequently considered ethical challenge.
This may be because only companies in the media
and technology sectors feel directly impacted by and
responsible for misinformation.
The financial, reputational and, increasingly, litigation-related costs of failing to address the ethics of technology mean that this should be a top priority for management. Based on our experience advising clients, here are three simple steps you can take to address ethical issues.
Some companies have built this thinking in from the start. Camera company Snap Inc., for example, kept misinformation front of mind as it developed its multimedia messaging platform, Snapchat.
“Fighting the spread of misinformation is important
to us,” says Dominic Perella, Snap’s Deputy General
Counsel and Chief Compliance Officer. “Our
platform design doesn’t allow misinformation
to spread because much of the interaction on
our platform is on a one-to-one or small group
communication basis, and because of the way we
designed our content platform. You can’t forward
things – there’s no virality.”
Although companies outside the technology and media industries are not directly responsible for the spread of misinformation, they can take steps to halt it. In June 2020, for example, a number of well-known brands paused advertising on all social media platforms over concerns that those platforms were propagating misinformation and hate speech.
1
Establish ethical principles
that govern technology use
When investing in technology that raises ethical
challenges, it is imperative to establish and
publish principles that govern how it will be used.
This increases customers’, employees’, and other
stakeholders’ trust that innovative technology
will be deployed within a clear framework. This is
what pharmaceutical company Novartis is doing in
relation to AI.
“The company is currently putting together our
position on the ethical use of AI, and will likely
publish it internally and externally,” says Matthew
Owens. “It reinforces how committed we are to
being transparent about how we use the technology,
how we are limiting or mitigating bias, and how
we are building in safety, security and privacy
by design.”