3D generated face representing artificial intelligence technology
Themotioncloud | iStock | Getty Images

A growing wave of deepfake scams has looted millions of dollars from companies worldwide, and cybersecurity experts warn it could get worse as criminals exploit generative AI for fraud.

A deepfake is a video, sound, or image of a real person that has been digitally altered and manipulated, often through artificial intelligence, to convincingly misrepresent them.

In one of the largest known cases this year, a Hong Kong finance worker was duped into transferring more than $25 million to fraudsters who used deepfake technology to disguise themselves as colleagues on a video call, authorities told local media in February.
   

Last week, UK engineering firm Arup confirmed to CNBC that it was the company involved in that case, but it could not go into details on the matter due to the ongoing investigation.

Such threats have been growing as a result of the popularization of OpenAI's ChatGPT, launched in 2022, which quickly shot generative AI technology into the mainstream, said David Fairman, chief information and security officer at cybersecurity company Netskope.

"The public accessibility of these services has lowered the barrier of entry for cybercriminals; they no longer need to have special technological skill sets," Fairman said.

The volume and sophistication of the scams have expanded as AI technology continues to evolve, he added.


Rising trend

Various generative AI services can be used to generate human-like text, image, and video content, and thus can act as powerful tools for illicit actors trying to digitally manipulate and recreate certain individuals.

A spokesperson from Arup told CNBC: "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes."

The finance worker had reportedly attended the video call with people believed to be the company's chief financial officer and other staff members, who requested he make a money transfer. In reality, the other attendees in the meeting were digitally recreated deepfakes.

Arup confirmed that "fake voices and images" were used in the incident, adding that "the number and sophistication of these attacks has been rising sharply in recent months."


Chinese state media reported a similar case in Shanxi province this year involving a female financial employee, who was tricked into transferring 1.86 million yuan ($262,000) to a fraudster's account after a video call with a deepfake of her boss.

Video: Sen. Marsha Blackburn talks bill targeting AI deepfakes

Broader implications

In addition to direct attacks, companies are increasingly worried about other malicious uses of deepfake photos, videos, or speeches of their higher-ups, cybersecurity experts say.

According to Jason Hogg, cybersecurity expert and executive-in-residence at Great Hill Partners, deepfakes of high-ranking company members can be used to spread fake news to manipulate stock prices, defame a company's brand and sales, and spread other harmful disinformation.

"That's just scratching the surface," said Hogg, who formerly served as an FBI special agent.

He highlighted that generative AI is able to create deepfakes based on a trove of digital information, such as publicly available content hosted on social media and other media platforms.

In 2022, Patrick Hillmann, chief communications officer at Binance, claimed in a blog post that scammers had made a deepfake of him based on previous news interviews and TV appearances, using it to trick customers and contacts into meetings.

Video: AI & deepfakes represent 'a new type of information security problem', says Drexel's Matthew Stamm
Netskope's Fairman said such risks had led some executives to begin wiping out or limiting their online presence out of fear that it could be used as ammunition by cybercriminals.

Deepfake technology has already become widespread outside the corporate world.

From fake pornographic images to manipulated videos promoting cookware, celebrities like Taylor Swift have fallen victim to deepfake technology. Deepfakes of politicians have also been rampant.

Meanwhile, some scammers have made deepfakes of individuals' family members and friends in attempts to fool them out of money.

According to Hogg, the broader issues will accelerate and get worse for a period of time, as cybercrime prevention requires thoughtful analysis to develop the systems, practices, and controls needed to defend against new technologies.

However, the cybersecurity experts told CNBC that firms can bolster defenses against AI-powered threats through improved staff education, cybersecurity testing, and requiring code words and multiple layers of approval for all transactions, something that could have prevented cases such as Arup's.