Visitors walk past a stand with AI (artificial intelligence) security cameras using facial recognition technology at the 14th China International Exhibition on Public Safety and Security at the China International Exhibition Center in Beijing in 2018. Nicolas Asfouri | AFP | Getty Images

The Biden administration is poised to open up a new front in its effort to safeguard U.S. AI from China with preliminary plans to place guardrails around the most advanced AI models, the core software of artificial intelligence systems like ChatGPT, sources said.

The Commerce Department is considering a new regulatory push to restrict the export of proprietary or closed-source AI models, whose software and the data they are trained on are kept under wraps, three people familiar with the matter said.

Any action would complement a series of measures put in place over the last two years to block the export of sophisticated AI chips to China in an effort to slow Beijing’s development of the cutting-edge technology for military purposes. Even so, it will be hard for regulators to keep pace with the industry’s fast-moving developments.

The Commerce Department declined to comment. The Chinese Embassy in Washington did not immediately respond to a request for comment.

Currently, nothing is stopping U.S. AI giants like Microsoft-backed OpenAI, Alphabet’s Google DeepMind and rival Anthropic, which have developed some of the most powerful closed-source AI models, from selling them to almost anyone in the world without government oversight.

Government and private sector researchers worry U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons.

To develop an export control on AI models, the sources said the U.S. may turn to a threshold contained in an AI executive order issued last October that is based on the amount of computing power it takes to train a model. When that level is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.

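As a minimal sketch of how such a compute threshold can be checked, the snippet below assumes the widely cited figure of 10^26 total training operations from the October 2023 executive order and the rough "6 × parameters × training tokens" estimate of training compute; the threshold value, the estimator and the example model size are illustrative assumptions, not part of any proposed export rule.

# Rough sketch (not an official methodology): estimate training compute with the
# common 6 * parameters * training_tokens heuristic and compare it to an assumed
# reporting threshold of 1e26 operations, the figure widely cited from the
# October 2023 executive order. All values here are illustrative assumptions.

ASSUMED_THRESHOLD_OPS = 1e26  # assumed reporting threshold, in total training operations

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 operations per parameter per token."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute crosses the assumed threshold."""
    return estimated_training_ops(parameters, training_tokens) >= ASSUMED_THRESHOLD_OPS

if __name__ == "__main__":
    # Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
    ops = estimated_training_ops(1e12, 10e12)
    print(f"Estimated training compute: {ops:.1e} operations")              # 6.0e+25
    print(f"Exceeds assumed 1e26 threshold: {exceeds_threshold(1e12, 10e12)}")  # False

Under these assumptions, even a very large hypothetical model lands just below the line, which is consistent with the reporting that no released model is thought to have reached the threshold yet.
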
That computing power threshold could become the basis for determining what AI models would be subject to export restrictions, according to two U.S. officials and another source briefed on the discussions. They declined to be named because details have not been made public.

If used, it would likely only restrict the export of models that have yet to be released, since none are thought to have reached the threshold yet, though Google’s Gemini Ultra is seen as being close, according to EpochAI, a research institute tracking AI trends.

The agency is far from finalizing a rule proposal, the sources stressed. But the fact that such a move is under consideration shows the U.S. government is seeking to close gaps in its effort to thwart Beijing’s AI ambitions, despite serious challenges to imposing a muscular regulatory regime on the fast-evolving technology.

As the Biden administration looks at competition with China and the dangers of sophisticated AI, AI models “are obviously one of the tools, one of the potential choke points that you need to think about here,” said Peter Harrell, a former National Security Council official.

“Whether you can, in fact, practically speaking, turn it into an export-controllable chokepoint remains to be seen,” he added.

Bioweapons and Cyber Attacks?

The American intelligence community, think tanks and academics are increasingly concerned about risks posed by foreign bad actors gaining access to advanced AI capabilities. Researchers at Gryphon Scientific and RAND Corporation noted that advanced AI models can provide information that could help create biological weapons.

The Department of Homeland Security said cyber actors would likely use AI to “develop new tools” to “enable larger-scale, faster, efficient, and more evasive cyber attacks” in its 2024 homeland threat assessment.

Any new export rules could also target other countries, one of the sources said.

“The potential explosion for [AI’s] use and exploitation is radical and we’re having actually a very hard time kind of following that,” Brian Holmes, an official at the Office of the Director of National Intelligence, said at an export control gathering in March, flagging China’s advancement as a particular concern.

AI Crackdown

To address these concerns, the U.S. has taken measures to stem the flow of American AI chips, and the tools to make them, to China.

It also proposed a rule to require U.S. cloud companies to tell the government when foreign customers use their services to train powerful AI models that could be used for cyber attacks. But so far it hasn’t addressed the AI models themselves.

Alan Estevez, who oversees U.S. export policy at the Department of Commerce, said in December that the agency was looking at options for regulating open-source large language model (LLM) exports before seeking industry feedback.

Tim Fist, an AI policy expert at the Washington, D.C.-based think tank CNAS, says the threshold “is a good temporary measure until we develop better methods of measuring the capabilities and risks of new models.”

The threshold is not set in stone. One of the sources said Commerce might end up with a lower floor, coupled with other factors, like the type of data or potential uses for the AI model, such as the ability to design proteins that could be used to make a biological weapon.

Regardless of the threshold, AI model exports will be hard to control. Many models are open source, meaning they would remain beyond the purview of the export controls under consideration.

Even imposing controls on the more advanced proprietary models will prove challenging, as regulators will likely struggle to define the right criteria to determine which models should be controlled at all, Fist said, noting that China is likely only around two years behind the United States in developing its own AI software.