
(Image: Portrait of a Businessman / CSA-Printstock/Getty Images)

Corporate surveillance technology is out of control

A new report details the disturbing ways your employer can monitor your life even out of the office.

When you log on to your work computer, or swipe your badge to get into your office, your expectation of privacy should change significantly — especially if your job involves sensitive legal, financial, or medical information. Most employees accept that their emails and web usage are subject to monitoring as part of corporate cybersecurity policies.

But what you might not be aware of is that today's workplace surveillance tools may be capturing your keystrokes, peeking into your clipboard, analyzing transcripts of your Zoom meetings and phone calls, and even monitoring your physical movements. This vast trove of data is then used to develop individual AI-generated "risk scores" that make inferences about your psychological condition and personal life outside of work. Global information security spending is estimated at $183 billion for 2024 and is expected to grow 15% in 2025, according to a report from Gartner.

A new report details the extensive capabilities found in today's workplace panopticon, built by companies like Microsoft, IBM, and a company now called Everfox, the public sector business spun off from Forcepoint. 

Wolfie Christl, a privacy researcher at Cracked Labs, authored the report and acknowledged the legitimate uses of this technology in specific industries, but advocated for limiting its use. 

"Applying intrusive surveillance to some employees with access to specifically sensitive resources certainly does not automatically justify applying the same level of surveillance to large groups of employees or an organization’s entire staff," Christl told Sherwood. 

Risky Humans

As cybersecurity incidents increase, tech companies have identified the biggest risk that companies face: the humans that work for them. Forcepoint, a cybersecurity company that was owned by defense contractor Raytheon until 2021, when it was sold to a private equity group, referred to “humans” as “the number one source of risk to organizations” in a presentation from 2017. In an online marketing brochure, the company said that it offered employers “an ‘over-the-shoulder’ view of the end-user’s work-station.”

(Screenshot of a 2017 Forcepoint presentation: “Humans are increasingly the number one source of risk to organizations.”)

Forcepoint offered "behavior-based solutions" that "prevents confidential data from leaving the corporate network, and eliminates breaches caused by insiders." In 2023, Forcepoint sold its public sector business (Forcepoint Federal) to a private equity firm for $2.45 billion, and rebranded as Everfox. 

Case studies on Forcepoint's website list customers from a wide range of industries, such as airlines, unnamed defense contractors in the US and Italy, healthcare companies, energy firms, banks, a law enforcement agency in the Philippines, and local governments in Mexico and Italy. Forcepoint has been awarded contracts with the US government totaling more than $369 million since 2010, with the lion’s share coming from the Department of Defense.

Forcepoint gave customers a menu of pre-built employee surveillance scenarios and "behavioral models" that look for telltale signs of unwanted workplace behavior. Such behavior includes exporting confidential data outside the company, engaging in corporate espionage to aid competitors, or abnormal log-in activity on employee accounts. 

But the list also included models that look for "negative" and "illicit" workplace behavior, such as whistleblowing, signs that employees may be looking to leave the company (such as emailing a resume), or "financial distress communications" that may indicate "financial turmoil."

According to an online training document, Forcepoint also categorized websites visited by employees into "risk classes" that may veer into employees' personal lives and efforts to unionize. Under "productivity loss," it listed "abortion," "drugs," and "worker organizations."

Forcepoint used a machine learning technique called "sentiment analysis" to infer emotions from employee communications, matching words from a dictionary of terms that may indicate a problem brewing with a worker. Some of the words included in this list: abort, addicted, anger, disappointed, mockery, stain, and vengeance. Scoring and ranking employees by the "negative sentiment" in their emails and meetings can create false positives, as computers are notoriously bad at understanding sarcasm.
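Dictionary-based sentiment scoring of this kind is simple to sketch. The snippet below is an illustrative toy, not Forcepoint's actual implementation; the term list is drawn from the words reported above, and the scoring function, its name, and its weighting are all assumptions for demonstration.

```python
import re

# Illustrative term list drawn from the words reported above;
# real products use far larger dictionaries and per-term weights.
NEGATIVE_TERMS = {"abort", "addicted", "anger", "disappointed",
                  "mockery", "stain", "vengeance"}

def negative_sentiment_score(text: str) -> float:
    """Return the fraction of words in `text` that match the negative-term list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_TERMS)
    return hits / len(words)

# Routine engineering language trips the dictionary -- the kind of
# false positive the article describes.
print(negative_sentiment_score("Had to abort the deploy; no anger here, just routine."))  # → 0.2
```

Even this toy shows the weakness: the scorer matches isolated words with no sense of context, so ordinary shop talk (or sarcasm) registers as "negative sentiment."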

Everfox declined to comment for this story. 

The threat is coming from inside the company

Microsoft offers a powerful set of tools to monitor employees in similar ways. Microsoft Purview is a product that offers several ways to guard against "insider risks" and enforce "communication compliance." A training document on Microsoft's website lists examples of "employee stressor events" that may flag a worker for closer monitoring, including "a poor performance review, a position demotion, or the user being placed on a performance review plan."

(Screenshot of a Microsoft presentation titled “Insider Risk Management”: “Leading indicators for malicious insider risks.”)

Other types of behavior that might trigger an elevated risk score for an employee include mounting USB devices at unusual hours, printing sensitive documents, or downloading a large number of files. A support page for Microsoft Sentinel acknowledges that the system can incorrectly flag an employee for suspicious activity. "No analytics rule is perfect, and you're bound to get some false positives that need handling," the document says.
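Risk scoring of this kind is typically a weighted sum over rule hits. The sketch below is a hypothetical illustration of that pattern, not Microsoft's implementation: the event types, weights, off-hours multiplier, and threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g., "usb_mount", "bulk_download", "print_sensitive"
    hour: int       # 0-23, local time of the event
    count: int = 1  # number of items involved (files downloaded, pages printed, ...)

# Hypothetical weights -- real products tune these per organization.
WEIGHTS = {"usb_mount": 3.0, "bulk_download": 2.0, "print_sensitive": 1.5}
OFF_HOURS_MULTIPLIER = 2.0   # events between 22:00 and 06:00 count double
BULK_THRESHOLD = 100         # downloads below this volume are ignored

def risk_score(events: list[Event]) -> float:
    """Sum rule weights over events, doubling anything that happens off-hours."""
    score = 0.0
    for e in events:
        if e.kind == "bulk_download" and e.count < BULK_THRESHOLD:
            continue  # ordinary download volume, not flagged
        w = WEIGHTS.get(e.kind, 0.0)
        if e.hour >= 22 or e.hour < 6:
            w *= OFF_HOURS_MULTIPLIER
        score += w
    return score

# A USB mount at 2 a.m. plus a 500-file download scores 3.0*2 + 2.0 = 8.0
print(risk_score([Event("usb_mount", hour=2),
                  Event("bulk_download", hour=14, count=500)]))
```

Note how crude such rules are: a night-shift employee doing routine work scores the same as a bad actor, which is exactly the false-positive problem the Sentinel documentation acknowledges.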

Microsoft makes it easy to deploy such surveillance alongside its ubiquitous Office 365 productivity suite, which increases the likelihood of unnecessary surveillance, said Christl.

"Microsoft carries a specific responsibility here. The findings in my report show that Microsoft is not only making it easy to implement intrusive surveillance, but often even recommends the more intrusive options in its software documentation and in the user interface," Christl said. 

A spokesperson for Microsoft told Sherwood News in an emailed statement, "At Microsoft, we think using technology to track employees is both counterproductive and wrong. Microsoft has consistently emphasized digital empathy — the philosophy that organizations' cyber risk leaders need to have open and transparent conversations with employees about the security and compliance policies an enterprise has in place to satisfy applicable laws, industry requirements and leaders' varying risk tolerances.”

People are worried about workplace surveillance 

As AI monitoring tools become increasingly incorporated into workplace infrastructure, the public has concerns about this surveillance going too far. 

A 2023 Pew Research survey found that while many people were in favor of employers using AI tools in some scenarios, such as monitoring drivers making company trips or interacting with retail customers, a majority of Americans opposed the tracking of workers' movements, monitoring when employees were at their desks, and close monitoring of work computer activity. This opposition was especially prevalent among younger workers. 

The survey also looked at how AI-powered worker monitoring could be misused. A majority of poll respondents said monitoring would lead to workers feeling like they were being inappropriately watched and that information collected about them could be abused.

One group, however, welcomed AI in one part of the workplace. Of the 74% of respondents who said that bias and unfair treatment based on race or ethnicity is a problem, almost half (46%) thought the use of AI in performance evaluations would make things better.

Stopping the spying bosses

While Congress has yet to pass any significant legislation regulating AI, a bill placing limits on workplace surveillance is currently making its way through the House. 

Sponsored by Rep. Christopher Deluzio and co-sponsored by Rep. Suzanne Bonamici, the "Stop Spying Bosses Act" would force employers to disclose to employees exactly how, when, and where they are being surveilled. Employees would also be told what kinds of data are being collected and what third parties might get access to them.

Originally introduced in the Senate, the current version of the bill also puts limits on the data that companies can collect from their employees, restricting any collection that interferes with worker organizing efforts or reveals anything about an employee's health status, political views, or religion. Companies would also not be allowed to monitor workers' activities when they are off-duty, or in sensitive locations such as bathrooms or lactation rooms. The bill also calls for the creation of a new Privacy and Technology Division inside the Department of Labor.

“It’s time to protect employees from the use of invasive surveillance technologies that allow bosses to track their workers minute by minute and move by move,” said Rep. Deluzio in a statement to Sherwood News. “Workers deserve far better than a workday full of endless suspicion and surveillance: they should have a workplace with respect and dignity.”

More Tech


Anthropic launches “Claude Design,” sending shares of Figma and Adobe down

Anthropic has been slowly and steadily gaining a leading share in the enterprise AI market by focusing on coding, spreadsheets, and other common productivity and workplace apps.

Now it’s going after design apps.

Today Anthropic launched Claude Design, a dedicated app powered by its latest model, Claude Opus 4.7, that lets users use text prompts to build website designs, user interface prototypes, presentations, and marketing materials.

Shares of Figma and Adobe sank on the news.

While Claude has previously had the ability to create designs and user interfaces, breaking it out into a dedicated app signals a major new piece of its enterprise strategy alongside its popular Claude Code product.



Apple’s China iPhone shipments surged 20% in Q1 even as overall smartphone shipments fell

Apple’s iPhone shipments in China jumped 20% last quarter, even as the country’s overall smartphone market fell 4%, according to new data from Counterpoint Research. Rising memory costs have pushed prices higher across the industry, weighing on demand.

Apple appears poised to ride out the broader smartphone slump. Its strength at the less price-sensitive high end of the market and its unusual leverage over suppliers, which helps keep costs in check, give it an edge over rivals.

Greater China remains a critical region for Apple, making up about 18% of its total revenue in the fourth quarter. The company accounted for 19% of China’s smartphone market in the first quarter, up from 15% a year earlier, per Counterpoint.

Rani Molla

Anthropic has surged past OpenAI in capturing business spending on generative-AI software

Last quarter, Anthropic attracted the lion’s share of trackable business spending on generative-AI software, according to new data from Ramp, a fintech company that provides corporate cards and expense management software for small firms and Fortune 500 companies alike.

The data showed that in the first quarter, Anthropic saw 37% of spending, its biggest share yet, versus 33% for OpenAI. Notably, the dataset doesn’t capture spending by Google or Microsoft.

OpenAI, which makes ChatGPT, still leads in overall adoption at 81% of AI buyers, but Anthropic is catching up, at nearly 63% in March. Overall, more than half of Ramp’s customers currently pay for AI, up from just 18% two years ago.

Anthropic’s enterprise tools, including Claude Code and Cowork, have been making waves among the business class, sending its revenue soaring.

Anthropic’s revenue share is even higher among companies spending on AI for the first time.

“Anthropic has definitely been on a tear,” Ara Kharazian, Ramp’s economist, told Sherwood News. “Its increase in adoption rates has been driven by its ability to sell to less technical users and smaller contracts than it typically has.”

It’s notable that midway through the first quarter, Anthropic had a falling-out with one of its biggest customers, the US government, which near the end of February decided to shun Anthropic’s products and lean into working with OpenAI.

Jon Keegan

Report: Google ditches its objection to defense work, pitches Gemini to Pentagon

In 2018, Google employees protested against the company’s tech being used for the US military’s Project Maven — a drone targeting program — reminding the company of its “don’t be evil” motto.

After the controversy, the company declined to renew the contract with the Pentagon, drawing a bright line between Big Tech and the national security establishment.

What a difference a few years makes.

Google is now actively working to get its Gemini AI model to be used in classified national security settings, according to a new report from The Information. Seeking a similar deal to the one OpenAI hashed out with the Pentagon, Google reportedly wants a contract that allows use of Gemini in classified work, but with a prohibition on mass domestic surveillance and autonomous lethal weapons.

But Google is playing catch-up in a major way. Amazon and Microsoft both have been widely used for classified defense work, and contractors are already experienced in working with their cloud systems, while Google’s services have never been used in classified work.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.