Sundar Pichai and Tim Cook. Source: Reuters; Apple
Apple said on Monday that the artificial intelligence models underpinning Apple Intelligence, its AI system, were pretrained on processors designed by Google, a sign that Big Tech companies are looking for alternatives to Nvidia when it comes to training cutting-edge AI.
Apple's choice of Google's homegrown Tensor Processing Unit (TPU) for training was detailed in a technical paper just published by the company. Separately, Apple released a preview version of Apple Intelligence for some devices on Monday.
Nvidia's pricey graphics processing units (GPUs) dominate the market for high-end AI training chips, and have been in such high demand over the past couple of years that they've been difficult to procure in the required quantities.
OpenAI, Microsoft and Anthropic are all using Nvidia's GPUs for their models, while other tech companies, including Google, Meta, Oracle and Tesla, are snapping them up to build out their AI systems and offerings.
Meta CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai both made comments last week suggesting that their companies and others in the industry may be overinvesting in AI infrastructure, but acknowledged that the business risk of doing otherwise was too high.
"The downside of being behind is that you're out of position for like the most important technology for the next 10 to 15 years," Zuckerberg said on a podcast with Bloomberg's Emily Chang.
Apple doesn't name Google or Nvidia in its 47-page paper, but notes that its Apple Foundation Model (AFM) and AFM server are trained on "Cloud TPU clusters."
That means Apple rented servers from a cloud provider to perform the calculations.
"This system allows us to train the AFM models efficiently and scalably, including AFM-on-device, AFM-server, and larger models," Apple said in the paper.
Representatives for Apple and Google didn't respond to requests for comment.
Apple revealed its AI plans later than many of its peers, which loudly embraced generative AI soon after OpenAI's launch of ChatGPT in late 2022.
On Monday, Apple introduced Apple Intelligence. The system includes several new features, such as a refreshed look for Siri, better natural language processing and AI-generated summaries in text fields.
Over the next year, Apple plans to roll out functions based on generative AI, including image generation, emoji generation and a powered-up Siri that can access the user's personal information and take actions inside of apps.
In Monday's paper, Apple said that AFM-on-device was trained on a single "slice" of 2,048 TPU v5p chips working together. That's the most advanced TPU, first launched in December.
AFM-server was trained on 8,192 TPU v4 chips that were configured to work together as eight slices through a data center network, according to the paper.
Google's latest TPUs cost under $2 per chip per hour of use when booked three years in advance, according to Google's website.
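For a rough sense of scale, the figures above can be combined into a back-of-the-envelope rental estimate. This sketch assumes the "under $2 per chip-hour" three-year-commitment rate applies uniformly to the cluster sizes Apple reported; actual TPU v4 and v5p pricing differs by generation and region, so these are illustrative upper bounds, not Apple's real costs.

```python
# Back-of-the-envelope cluster rental cost using figures from the article.
# Assumption: the ~$2/chip-hour three-year-commitment rate from Google's
# website applies to every chip; real rates vary by TPU generation/region.
PRICE_PER_CHIP_HOUR = 2.00  # upper bound, USD

def cluster_cost_per_hour(num_chips: int, price: float = PRICE_PER_CHIP_HOUR) -> float:
    """Upper-bound hourly rental cost for a TPU cluster of num_chips chips."""
    return num_chips * price

# Cluster sizes reported in Apple's paper:
afm_on_device = cluster_cost_per_hour(2048)  # one slice of TPU v5p
afm_server = cluster_cost_per_hour(8192)     # eight slices of TPU v4

print(f"AFM-on-device slice: ~${afm_on_device:,.0f}/hour")
print(f"AFM-server cluster:  ~${afm_server:,.0f}/hour")
```

At these sizes, even the hourly ceiling runs to five figures for the server-model cluster, which helps explain why pretraining budgets dominate the economics of frontier models.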
Google first introduced its TPUs in 2015 for internal workloads, and made them available to the public in 2017. They're now among the most mature custom chips designed for artificial intelligence.
Still, Google is one of Nvidia's top customers. It uses Nvidia's GPUs as well as its own TPUs for training AI systems, and also sells access to Nvidia's technology on its cloud.
Apple previously said that inferencing, which means taking a pretrained AI model and running it to generate content or make predictions, would partially happen on Apple's own chips in its data centers.
This is the second technical paper about Apple's AI system, after a more general version was published in June. Apple said at the time that it was using TPUs as it developed its AI models.
Apple is scheduled to report quarterly results after the close of trading on Thursday.