Meta’s hefty investment in artificial intelligence includes development of an AI system designed to power Facebook’s entire video recommendation engine across all its platforms, a company executive said Wednesday.
Tom Alison, the head of Facebook, said part of Meta’s “technology roadmap that goes to 2026” involves developing an AI recommendation model that can power both the company’s TikTok-like Reels short video service and more traditional, longer videos.
To date, Meta has typically used a separate model for each of its products, such as Reels, Groups and the core Facebook Feed, Alison said onstage at Morgan Stanley’s tech conference in San Francisco.
As part of Meta’s ambitious foray into AI, the company has been spending billions of dollars on Nvidia graphics processing units, or GPUs. They’ve become the primary chips used by AI researchers for training the types of large language models that power OpenAI’s popular ChatGPT chatbot and other generative AI models.
Alison said “phase 1” of Meta’s tech roadmap involved switching the company’s current recommendation systems to GPUs from more traditional computer chips, helping to improve the overall performance of its products.
As interest in LLMs exploded last year, Meta executives were struck by how these big AI models could “handle lots of data and all kinds of very general-purpose types of activities like chatting,” Alison said.
Meta came to see the possibility of a giant recommendation model that could be used across products, and by last year it had built “this kind of new model architecture,” Alison said, adding that the company tested it on Reels.
This new “model architecture” helped Facebook obtain “an 8% to 10% gain in Reels watch time” on the core Facebook app, which Alison said helped prove that the model was “learning from the data much more efficiently than the previous generation.”
“We’ve really focused on kind of investing more in making sure that we can scale these models up with the right kind of hardware,” he said.
Meta is now in “phase 3” of its re-architecture of the system, which involves trying to validate the technology and push it across multiple products.
“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model,” Alison said.
“If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”
Illustrating how it will work if successful, Alison said, “If you see something that you’re into in Reels, and then you go back to the Feed, we can kind of show you more similar content.”
Alison said Meta has accumulated a massive stockpile of GPUs that will be used to help its broader generative AI efforts, such as development of digital assistants.
Some generative AI projects Meta is considering include incorporating more sophisticated chatting tools into its core Feed, so a person who sees a “recommended post about Taylor Swift” could perhaps “easily just click a button and say, ‘Hey Meta AI, tell me more about what I’m seeing with Taylor Swift right now.’”
Meta is also experimenting with integrating its AI chatting tool within Groups, so a member of a Facebook baking group could potentially ask a question about desserts and get an answer from a digital assistant.
“I think we have the opportunity to put generative AI in kind of a multiplayer kind of consumer environment,” Alison said.