Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, speaks onstage during Vox Media’s 2023 Code Conference in Dana Point, California, Sept. 27, 2023.

Jerod Harris | Getty Images

Former OpenAI board member Helen Toner, who helped oust CEO Sam Altman in November, broke her silence when she spoke on a podcast released Tuesday about events inside the company leading up to Altman’s firing.

One example she gave: When OpenAI released ChatGPT in November 2022, the board was not informed in advance and found out about it on Twitter. Toner also said Altman did not tell the board he owned the OpenAI startup fund.


Altman was reinstated as CEO less than a week after he was fired, but Toner’s comments give insight into the decision for the first time.

“The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first over profits, investor interests, and other things,” Toner said on The TED AI Show podcast.

“But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she said.

Toner said Altman gave the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions.

“For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or misinterpreted, or whatever,” Toner said. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money.”

Toner explained that the board had been working to address these issues. She said that in October, a month before the ousting, the board had conversations with two executives who relayed experiences with Altman they weren’t comfortable sharing before, including screenshots and documentation of problematic interactions and mistruths.

“The two of them suddenly started telling us how they couldn’t trust him, about the toxic atmosphere he was creating,” Toner said. “They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change.”

Artificial general intelligence, or AGI, is a broad term that refers to a type of artificial intelligence that outperforms human abilities on various cognitive tasks.

An OpenAI spokesperson was not immediately available to comment.

Earlier this month, OpenAI disbanded its team focused on the long-term risks of AI a year after the company announced the group. The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.
Leike, who has since announced he is joining AI competitor Anthropic, wrote on Friday that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Toner’s comments and the high-profile departures follow last year’s leadership crisis.

In November, OpenAI’s board ousted Altman, saying it had conducted “a deliberative review process” and that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

“The board no longer has confidence in his ability to continue leading OpenAI,” it said.


The Wall Street Journal and other media outlets reported that while Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s removal prompted resignations and threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back, and board members Toner and Tasha McCauley, who had voted to oust Altman, were out. Sutskever relinquished his seat on the board and remained on staff until he announced his departure on May 14. Adam D’Angelo, who had also voted to oust Altman, remains on the board.

In March, OpenAI announced its new board, which includes Altman, and the conclusion of an internal investigation by law firm WilmerHale into the events leading up to Altman’s ouster.

OpenAI did not publish the WilmerHale investigation report but summarized its findings.

“The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg,” OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman. The review also “concluded the board acted in good faith [and] did not anticipate some of the instability that led afterwards,” Taylor added.