Sam Altman, CEO of OpenAI, speaks at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023.

Dustin Chambers | Bloomberg | Getty Images

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

News of the team’s dissolution was first reported by Wired.

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike did not immediately respond to a request for comment.

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya. I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist.

News of Sutskever’s and Leike’s departures, and the dissolution of the superalignment team, comes days after OpenAI launched a new AI model and a desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand the use of its popular chatbot.

The update brings the GPT-4 model to everyone, including OpenAI’s free users, technology chief Mira Murati said Monday in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities in text, video and audio.

OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.