After pulling its artificial intelligence image generation tool on Thursday due to a string of controversies, Google plans to relaunch the product soon, according to Google DeepMind CEO Demis Hassabis.

Google introduced the image generator earlier this month through Gemini, the company’s main suite of AI models. The tool allows users to enter prompts to create an image. Over the past week, users discovered historical inaccuracies and questionable responses, which have circulated widely on social media.

“We have taken the feature offline while we fix that,” Hassabis said Monday during a panel at the Mobile World Congress conference in Barcelona. “We are hoping to have that back online very shortly in the next couple of weeks, few weeks.” He added that the product was not “working the way we intended.”

Alphabet shares fell 4.4% on Monday to close at $137.57.
The controversy follows a high-profile rebrand Google announced this month, when it changed the name of its chatbot and rolled out a fresh app and new subscription options. The chatbot and assistant formerly known as Bard, a chief competitor to OpenAI’s ChatGPT, is now called Gemini, the same name as the suite of AI models that power the chatbot.

Here are some examples of what went wrong.
When one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots on social media platform X.

When asked for a “historically accurate depiction of a medieval British king,” the model generated another racially diverse set of images, including one of a woman ruler, screenshots show. Users reported similar outcomes when they asked for images of the U.S. founding fathers, an 18th-century king of France, a German couple in the 1800s and more. The model showed an image of Asian men in response to a query about Google’s own founders, users reported.
“The Gemini debacle showed how AI ethics *wasn’t* being applied with the nuanced expertise necessary,” Margaret Mitchell, chief ethics scientist at Hugging Face and former co-leader of Google’s AI ethics group, wrote on X. “It demonstrates the need for people who are great at creating roadmaps given foreseeable use.”
Alphabet CEO Sundar Pichai is shouldering some of the blame. Pichai highlighted the firm’s commitment to AI during the company’s latest earnings call and said he eventually wants to offer an AI agent that can complete more tasks on a user’s behalf, including within Google Search. He said at the time that there is “a lot of execution ahead.”

Last year, Pichai was criticized by some employees for the company’s botched and “rushed” rollout of Bard, which followed the viral spread of ChatGPT. In addition to Pichai, leaders at companies including Microsoft and Amazon have underlined their commitment to building AI agents as productivity tools.
The latest problems with Gemini have reignited a debate within the AI industry, with some groups calling Gemini too “woke,” or left-leaning, and others saying that the company didn’t sufficiently invest in the right forms of AI ethics. Google came under fire in 2020 and 2021 for ousting the co-leads of its AI ethics group after they published a research paper critical of certain risks of such AI models, and for later reorganizing the group’s structure.
The controversy isn’t limited to Gemini’s image generator. On Sunday, a text-based user query went viral, asking the Gemini chatbot whether Adolf Hitler or Elon Musk’s tweeting of memes had a greater negative impact on society.

“It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways,” Gemini responded. “Elon Musk’s tweets have been criticized for being insensitive, harmful, and misleading.” The model later added, “Hitler, on the other hand, was responsible for the deaths of millions of people during World War II.”
Google said in a statement Wednesday that it’s working to fix Gemini’s image-generation issues, acknowledging that the tool was “missing the mark.” The following day, the company announced it would immediately “pause the image generation of people” and “re-release an improved version soon.”
Google is investing heavily to push its AI work into the realm of AI assistants or agents, a term often used to describe tools ranging from chatbots to coding assistants and other productivity tools. AI agents could eventually schedule a group hangout by scanning calendars to ensure there are no conflicts, book travel and activities, buy presents for loved ones or perform a specific job function such as outbound sales. Currently, the tools are largely limited to tasks such as summarizing, generating to-do lists or helping to write code.

Google’s Gemini changes are a first step to “building a true AI assistant,” Sissie Hsiao, a vice president at Google and general manager for Google Assistant and Bard, told reporters on a call earlier this month.