
Cue the George Orwell reference.

Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.

Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.

Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.
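
That kind of cohort-level view boils down to aggregating sentiment scores over anonymized attributes. Below is a minimal, hypothetical sketch in Python of the general idea, assuming messages have already been scored for sentiment and tagged with coarse fields such as age group and region; it is an illustration, not Aware’s actual pipeline.

```python
# Hypothetical illustration of cohort-level sentiment aggregation -- not Aware's code.
import pandas as pd

# Assumed inputs: per-message sentiment scores plus anonymized cohort tags.
messages = pd.DataFrame({
    "age_group": ["40+", "under_40", "40+", "under_40", "40+"],
    "region":    ["midwest", "midwest", "northeast", "west", "midwest"],
    "sentiment": [-0.6, 0.2, -0.4, 0.5, -0.7],  # -1 (negative) to +1 (positive)
})

# Aggregate to the cohort level so no individual message or author is exposed.
cohort_view = (
    messages.groupby(["age_group", "region"])["sentiment"]
            .agg(avg_sentiment="mean", n_messages="count")
)
print(cohort_view)
```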

Aware’s analytics tool, the one that monitors employee sentiment and toxicity, doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.

CNBC didn’t receive a response from Walmart, T-Mobile, Chevron, Starbucks or Nestle regarding their use of Aware. A representative from AstraZeneca said the company uses the eDiscovery product but that it doesn’t use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery for monitoring trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal records retention in its social media platform.

It doesn’t take a dystopian novel enthusiast to see where it could all go very wrong.


Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.

Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”

Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that’s exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzy phrase for corporate earnings calls, and some form of the technology is automating tasks in just about every industry, from financial services and biomedical research to logistics, online travel and utilities.

Aware’s revenue has jumped 150% per year on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.

By industry standards, Aware is staying quite lean. The company last raised money in 2021, when it pulled in $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model, or LLM, companies such as OpenAI and Anthropic, which have raised billions of dollars each, largely from strategic partners.


‘Tracking real-time toxicity’

Schumann started the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.

Before that, he was an entrepreneur. And Aware isn’t the first company he’s started that’s elicited thoughts of Orwell.

In 2005, Schumann founded a company called BigBrotherLite.com. According to his LinkedIn profile, the business developed software that “enhanced the digital and mobile viewing experience” of the CBS reality series “Big Brother.” In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state in which citizens were under perpetual surveillance.


“I built a simple player focused on a cleaner and easier consumer experience for people to watch the TV show on their computer,” Schumann said in an email.

At Aware, he’s doing something very different.

Every year, the company puts out a report aggregating insights from the billions of messages sent across large companies (in 2023, the number was 6.5 billion), tabulating perceived risk factors and workplace sentiment scores. Schumann refers to the trillions of messages sent across workplace communication platforms every year as “the fastest-growing unstructured data set in the world.”

When including other types of content being shared, such as images and videos, Aware’s analytics AI analyzes more than 100 million pieces of content every day. In so doing, the technology creates a company social graph, looking at which teams internally talk to each other more than others.
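
Conceptually, a social graph like that can be built by counting messages exchanged between teams. The sketch below is a hypothetical illustration in Python, not Aware’s implementation; the `from_team` and `to_team` fields are assumed stand-ins for whatever metadata a platform actually exposes.

```python
# Hypothetical illustration of a team-to-team communication graph -- not Aware's system.
from collections import Counter

messages = [
    {"from_team": "engineering", "to_team": "product"},
    {"from_team": "engineering", "to_team": "product"},
    {"from_team": "sales",       "to_team": "legal"},
    {"from_team": "product",     "to_team": "engineering"},
]

# Count undirected team-to-team edges: higher counts mean teams that talk more.
edges = Counter(
    tuple(sorted((m["from_team"], m["to_team"]))) for m in messages
)

for (team_a, team_b), weight in edges.most_common():
    print(f"{team_a} <-> {team_b}: {weight} messages")
```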

“It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity,” Schumann said of the analytics tool. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”
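
One common way to surface that kind of spike is to compare the most recent time window against a rolling baseline. The sketch below is a hypothetical illustration, assuming per-window average sentiment scores have already been computed; it is not how Aware’s tool is actually built.

```python
# Hypothetical illustration of sentiment spike detection -- not Aware's system.
from statistics import mean, stdev

# Assumed input: average sentiment per 20-minute window, oldest to newest.
window_scores = [0.05, 0.02, -0.01, 0.03, 0.04, 0.02, 0.65]

history, latest = window_scores[:-1], window_scores[-1]
baseline, spread = mean(history), stdev(history)

# Simple z-score test: a spike is a window far from the recent baseline.
z = (latest - baseline) / spread if spread else 0.0
if abs(z) > 3:
    direction = "positive" if z > 0 else "negative"
    print(f"Sentiment spike detected ({direction}), z-score {z:.1f}")
```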

Aware confirmed to CNBC that it uses data from its enterprise clients to train its machine-learning models. The company’s data repository contains about 6.5 billion messages, representing about 20 billion individual interactions across more than 3 million unique employees, the company said.

When a new client signs up for the analytics tool, it takes Aware’s AI models about two weeks to train on employee messages and get to know the patterns of emotion and sentiment within the company so it can see what’s normal versus abnormal, Schumann said.

“It won’t have names of people, to protect the privacy,” Schumann said. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”


But Aware’s eDiscovery tool operates differently. A company can set up role-based access to employee names depending on the “extreme risk” category of the company’s choice, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.

“Some of the common ones are extreme violence, extreme bullying, harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.

For instance, a client can specify a “violent threats” policy, or any other category, using Aware’s technology, Schumann said, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace by Meta. The client could also couple that with rule-based flags for certain phrases, statements and more. If the AI found something that violated a company’s specified policies, it could provide the employee’s name to the client’s designated representative.
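
In practice, combining rule-based flags with a model score under a named policy can look something like the hypothetical sketch below. The phrase list, classifier stub and threshold are invented for illustration and are not Aware’s actual policies, models or API.

```python
# Hypothetical illustration of a policy check -- not Aware's models or API.
import re

VIOLENT_THREAT_PHRASES = [r"\bI will hurt\b", r"\byou'?re dead\b"]  # invented rules

def model_threat_score(text: str) -> float:
    """Stand-in for an ML classifier returning a 0-1 threat probability."""
    return 0.9 if "hurt" in text.lower() else 0.1

def check_message(text: str, author: str, threshold: float = 0.8):
    rule_hit = any(re.search(p, text, re.IGNORECASE) for p in VIOLENT_THREAT_PHRASES)
    score = model_threat_score(text)
    if rule_hit or score >= threshold:
        # Only on a flagged violation is the author surfaced to a designated reviewer.
        return {"policy": "violent threats", "author": author, "score": score}
    return None

print(check_message("I will hurt you if this ships late", author="employee_123"))
```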

This type of practice has been used for years within email communications. What’s new is the use of AI and its application across workplace messaging platforms such as Slack and Teams.

Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.

“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.”

Schumann said that though Aware’s eDiscovery tool allows security or HR investigations teams to use AI to search through massive amounts of data, a “similar but basic capability already exists today” in Slack, Teams and other platforms.

“A key distinction here is that Aware and its AI models are not making decisions,” Schumann said. “Our AI simply makes it easier to comb through this new data set to identify potential risks or policy violations.”


Privacy concerns

Even when data is aggregated or anonymized, research suggests that the protection is a flawed concept. A landmark study on data privacy using 1990 U.S. Census data showed that 87% of Americans could be identified solely by using ZIP code, birth date and gender. Aware clients using its analytics tool have the power to add metadata to message tracking, such as employee age, location, division, tenure or job function.
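
The re-identification risk behind that finding is easy to demonstrate on toy data: count how many records are uniquely determined by the same three quasi-identifiers. The sketch below uses made-up records purely for illustration and is not drawn from any real dataset.

```python
# Hypothetical illustration of quasi-identifier re-identification risk.
import pandas as pd

records = pd.DataFrame({
    "zip":        ["43215", "43215", "43215", "60601", "60601"],
    "birth_date": ["1980-03-02", "1980-03-02", "1991-07-14", "1975-11-30", "1975-11-30"],
    "gender":     ["F", "M", "F", "M", "M"],
})

# Count how many records share each (zip, birth_date, gender) combination.
combo_sizes = records.groupby(["zip", "birth_date", "gender"]).size()

# Records whose combination is unique are trivially re-identifiable.
unique_share = (combo_sizes == 1).sum() / len(records)
print(f"{unique_share:.0%} of records are unique on ZIP + birth date + gender")
```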

“What they’re saying is relying on a very outdated and, I would say, entirely debunked notion at this point that anonymization or aggregation is like a magic bullet through the privacy concern,” Kak said.

Additionally, the type of AI model Aware uses can be effective at generating inferences from aggregate data, making accurate guesses, for instance, about personal identifiers based on language, context, slang terms and more, according to recent research.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” Kak said. “There is no one who can tell you with a straight face that these challenges are solved.”

And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to offer a defense if they’re not privy to all of the data involved, Williams said.

“How do you face your accuser when we know that AI explainability is still immature?” Williams said.

Schumann said in response: “None of our AI models make decisions or recommendations regarding employee discipline.”

“When the model flags an interaction,” Schumann said, “it provides full context around what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

