Mark Zuckerberg told the world in October 2021 that he was rebranding Facebook to Meta as the company pushes toward the metaverse. (Facebook | via Reuters)
Meta has built custom computer chips to help with its artificial intelligence and video-processing tasks, and is talking about them in public for the first time.
The social networking giant disclosed its internal silicon chip projects to reporters earlier this week, ahead of a virtual event Thursday discussing its AI technical infrastructure investments.
Investors have been closely watching Meta’s investments in AI and related data center hardware as the company embarks on a “year of efficiency” that includes at least 21,000 layoffs and major cost cutting.
Although it’s expensive for a company to design and build its own computer chips, vice president of infrastructure Alexis Bjorlin told CNBC that Meta believes the improved performance will justify the investment.
The company has also been overhauling its data center designs to focus more on energy-efficient techniques, such as liquid cooling, to reduce excess heat.

One of the new computer chips, the Meta Scalable Video Processor, or MSVP, is used to process and transmit video to users while cutting down on energy requirements.
Bjorlin said “there was nothing commercially available” that could handle the task of processing and delivering 4 billion videos a day as efficiently as Meta wanted.
The other processor is the first in the company’s Meta Training and Inference Accelerator, or MTIA, family of chips, intended to help with various AI-specific tasks.
The new MTIA chip specifically handles “inference,” which is when an already-trained AI model makes a prediction or takes an action.
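Conceptually, inference is just a forward pass through weights that were fixed during an earlier training phase. A minimal sketch in plain Python illustrates the distinction (the weights and scoring function here are hypothetical toy values, not anything Meta has disclosed):

```python
import math

# Hypothetical weights produced by a (not shown) training phase.
# During inference these are frozen; the model only applies them.
WEIGHTS = [0.8, -0.4, 1.2]
BIAS = 0.1

def predict(features):
    """Run inference: score one input with fixed weights, no learning."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-score))  # squash score to a 0-1 probability

prob = predict([1.0, 2.0, 0.5])  # roughly 0.67 for these toy values
```

An inference accelerator like MTIA specializes in running enormous numbers of such forward passes cheaply, as opposed to training hardware, which must also compute weight updates.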
Bjorlin said the new AI inference chip helps power some of Meta’s recommendation algorithms used to show content and ads in people’s news feeds.

She declined to say who is manufacturing the chip, but a blog post said the processor is “fabricated in TSMC 7nm process,” indicating that chip giant Taiwan Semiconductor Manufacturing is producing the technology.
She said Meta has a “multi-generational roadmap” for its family of AI chips that includes processors used for the task of training AI models, but she declined to offer details beyond the new inference chip.
Reuters previously reported that Meta canceled one AI inference chip project and started another that was supposed to roll out around 2025, but Bjorlin declined to comment on that report.
Because Meta isn’t in the business of selling cloud computing services like Google parent Alphabet or Microsoft, the company didn’t feel compelled to publicly talk about its internal data center chip projects, she said.
“If you look at what we’re sharing — our first two chips that we developed — it’s definitely giving a little bit of a view into what are we doing internally,” Bjorlin said. “We haven’t had to advertise this, and we don’t need to advertise this, but you know, the world is interested.”
Meta vice president of engineering Aparna Ramani said the company’s new hardware was developed to work effectively with its home-grown PyTorch software, which has become one of the most popular tools used by third-party developers to create AI apps.

The new hardware will eventually be used to power metaverse-related tasks, such as virtual reality and augmented reality, as well as the burgeoning field of generative AI, which generally refers to AI software that can create compelling text, images and videos.
Ramani also said Meta has developed a generative AI-powered coding assistant to help the company’s developers more easily create and operate software.
The new assistant is similar to Microsoft’s GitHub Copilot tool, which Microsoft released in 2021 with help from the AI startup OpenAI.
In addition, Meta said it completed the second and final phase of the buildout of its supercomputer, dubbed Research SuperCluster, or RSC, which the company detailed last year.
Meta used the supercomputer, which contains 16,000 Nvidia A100 GPUs, to train the company’s LLaMA language model, among other uses.
Ramani said Meta continues to act on its belief that it should contribute to open-source technologies and AI research in order to advance the field.
The company has disclosed that its biggest LLaMA language model, LLaMA 65B, contains 65 billion parameters and was trained on 1.4 trillion tokens, the units of data used for AI training.
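To put those disclosed figures in context, a quick back-of-the-envelope calculation (the ratio itself is my arithmetic, not from the article) gives the number of training tokens per model parameter:

```python
# Figures disclosed for LLaMA 65B, per the article.
parameters = 65e9   # 65 billion parameters
tokens = 1.4e12     # 1.4 trillion training tokens

# Tokens seen per parameter during training.
ratio = tokens / parameters
print(f"{ratio:.1f} tokens per parameter")  # prints "21.5 tokens per parameter"

# Same calculation for Google's PaLM 2, using the figures CNBC reported.
palm2_ratio = 3.6e12 / 340e9  # roughly 10.6 tokens per parameter
```

The ratio is one rough way researchers compare how heavily different labs trained models of different sizes.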
Companies such as OpenAI and Google have not publicly disclosed similar metrics for their competing large language models, although CNBC reported this week that Google’s PaLM 2 model was trained on 3.6 trillion tokens and contains 340 billion parameters.
Unlike other tech companies, Meta released its LLaMA language model to researchers so they can learn from the technology.
However, the LLaMA language model was then leaked to the wider public, leading many developers to build apps incorporating the technology.
Ramani said Meta is “still thinking through all of our open source collaborations, and certainly, I want to reiterate that our philosophy is still open science and cross collaboration.”