OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.

Jason Redmond | AFP | Getty Images

OpenAI on Thursday backtracked on a controversial decision that, in effect, forced former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company.

The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones.

The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” the memo stated.

The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed.

“As we shared with employees, we are making important updates to our departure process,” an OpenAI spokesperson told CNBC in a statement.

“We have not and never will take away vested equity, even when people didn’t sign the departure documents. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” said the statement, adding that former employees would be informed of this as well.

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” the OpenAI spokesperson added.


Bloomberg first reported on the release from the non-disparagement provision.

Vox first reported on the existence of the NDA provision.

The news comes amid mounting controversy for OpenAI over the past week or so.

On Monday, one week after OpenAI debuted a range of audio voices for ChatGPT, the company announced it would pull one of the viral chatbot’s voices, named “Sky.”

“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she had declined to let the company use it.

“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” the Microsoft-backed company posted on X. “We are working to pause the use of Sky while we address them.”

Also last week, OpenAI disbanded its team focused on the long-term risks of artificial intelligence, just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke to CNBC on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s Superalignment team, which was formed last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

The company did not provide a comment on the record and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI [artificial general intelligence] so that the world can better prepare for it.”