Corporate surveillance technology is out of control

A new report details the disturbing ways your employer can monitor your life even out of the office.

When you log on to your work computer, or swipe your badge to get into your office, your expectation of privacy should change significantly — especially if your job involves sensitive legal, financial, or medical information. Most employees generally accept that their emails and web usage are subject to monitoring as part of corporate cybersecurity policies.

But what you might not be aware of is that today's workplace surveillance tools may be capturing your keystrokes, peeking into your clipboard, analyzing transcripts of your Zoom meetings and phone calls, and even monitoring your physical movements. This vast trove of data is then used to produce individual AI-generated "risk scores" that make inferences about your psychological state and your personal life outside of work. Global information security spending is estimated at $183 billion for 2024 and is expected to grow 15% in 2025, according to a report from Gartner.

A new report details the extensive capabilities found in today's workplace panopticon, built by companies like Microsoft, IBM, and a company now called Everfox, the public sector business spun off from Forcepoint. 

Wolfie Christl, a privacy researcher at Cracked Labs, authored the report and acknowledged the legitimate uses of this technology in specific industries, but advocated for limiting its use. 

"Applying intrusive surveillance to some employees with access to specifically sensitive resources certainly does not automatically justify applying the same level of surveillance to large groups of employees or an organization’s entire staff," Christl told Sherwood. 

Risky Humans

As cybersecurity incidents increase, tech companies have identified the biggest risk that companies face: the humans who work for them. Forcepoint, a cybersecurity company that was owned by defense contractor Raytheon until it was sold to a private equity group in 2021, referred to “humans” as “the number one source of risk to organizations” in a presentation from 2017. In an online marketing brochure, the company said that it offered employers “an ‘over-the-shoulder’ view of the end-user’s work-station.”

Screenshot of a 2017 Forcepoint presentation: “Humans are increasingly the number one source of risk to organizations.”

Forcepoint offered "behavior-based solutions" that it said "prevents confidential data from leaving the corporate network, and eliminates breaches caused by insiders." In 2023, Forcepoint sold its public sector business, Forcepoint Federal, to a private equity firm for $2.45 billion, and that business was rebranded as Everfox.

Case studies on Forcepoint's website list customers from a wide range of industries, such as airlines, unnamed defense contractors in the US and Italy, healthcare companies, energy firms, banks, a law enforcement agency in the Philippines, and local governments in Mexico and Italy. Forcepoint has been awarded US government contracts totaling more than $369 million since 2010, with the lion’s share coming from the Department of Defense.

Forcepoint gave customers a menu of pre-built employee surveillance scenarios and "behavioral models" that look for telltale signs of unwanted workplace behavior. Such behavior includes exporting confidential data outside the company, engaging in corporate espionage to aid competitors, or abnormal log-in activity on employee accounts. 

But the list also included models that look for "negative" and "illicit" workplace behavior, such as whistleblowing, signs that an employee may be looking to leave the company (emailing a resume, for instance), or "financial distress communications" that may indicate "financial turmoil."

According to an online training document, Forcepoint also categorized websites visited by employees into "risk classes" that may veer into employees' personal lives and efforts to unionize. Under "productivity loss," it listed "abortion," "drugs," and "worker organizations."
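To make the mechanism concrete, here is a minimal sketch (in Python, using made-up domains and category names; this is not Forcepoint's actual code) of how a monitoring tool could match an employee's visited sites against predefined "risk classes":

```python
# Hypothetical illustration only, not Forcepoint's implementation.
# Visited domains are matched against hand-curated "risk class" lists.
RISK_CLASSES = {
    "productivity_loss": {"jobboard.example", "union-local.example"},
    "data_exfiltration": {"pastebin.example", "filesharing.example"},
}

def classify_visit(domain: str) -> list[str]:
    """Return every risk class whose list contains this domain."""
    return [cls for cls, domains in RISK_CLASSES.items() if domain in domains]

# Only flagged visits are kept and attached to the employee's record.
visits = ["union-local.example", "docs.example", "pastebin.example"]
flags = {d: classify_visit(d) for d in visits if classify_visit(d)}
print(flags)  # {'union-local.example': ['productivity_loss'], 'pastebin.example': ['data_exfiltration']}
```

The point of the sketch is that a flag depends entirely on which categories the vendor chooses to define: putting a union website under "productivity loss" is a policy decision, not a technical one.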

Forcepoint used a machine learning technique called "sentiment analysis" to infer emotions from employee communications, matching words from a dictionary of terms that may indicate a problem brewing with a worker. Some of the words included in this list: abort, addicted, anger, disappointed, mockery, stain, and vengeance. Scoring and ranking employees by the "negative sentiment" in their emails and meetings can create false positives, as computers are notoriously bad at understanding sarcasm.
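Dictionary-based scoring of this kind is simple to implement, which is part of why it misfires. A minimal sketch, assuming a hypothetical word list and a crude hit-rate score (this is not Forcepoint's code):

```python
import re

# Hypothetical term list, loosely echoing the words cited in the report.
NEGATIVE_TERMS = {"abort", "addicted", "anger", "disappointed", "mockery", "vengeance"}

def negative_sentiment_score(text: str) -> float:
    """Share of words in a message that appear in the negative-term dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_TERMS)
    return hits / len(words)

# An ordinary engineering update scores as "negative" because the dictionary
# has no sense of context or sarcasm.
print(negative_sentiment_score("Had to abort the deploy, so disappointed in this mockery of a process"))
```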

Everfox declined to comment for this story. 

The threat is coming from inside the company

Microsoft offers a powerful set of tools to monitor employees in similar ways. Microsoft Purview is a product with several features for managing "insider risks" and "communication compliance." A training document on Microsoft's website lists examples of "employee stressor events" that may flag a worker for closer monitoring, including "a poor performance review, a position demotion, or the user being placed on a performance review plan."

Screenshot of a Microsoft presentation titled “Insider Risk Management,” showing leading indicators for malicious insider risks.

Other types of behavior that might trigger an elevated risk score for an employee include mounting USB devices at unusual hours, printing sensitive documents, or downloading a large number of files. A support page for Microsoft Sentinel acknowledges that the system can incorrectly flag an employee for suspicious activity. "No analytics rule is perfect, and you're bound to get some false positives that need handling," the document says.
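As a rough illustration of how such signals might be combined (a hypothetical sketch, not Microsoft Purview's actual scoring logic; all weights, thresholds, and field names here are invented), an additive rule-based score could look like this:

```python
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    kind: str   # e.g. "usb_mount", "bulk_download", "print_sensitive"
    hour: int   # local hour of day, 0-23

# All numbers below are made up for illustration.
WEIGHTS = {"usb_mount": 20, "bulk_download": 30, "print_sensitive": 25}
OFF_HOURS_BONUS = 15     # extra points for activity outside 8am-6pm
STRESSOR_BOOST = 1.5     # multiplier after an event like a poor performance review
ALERT_THRESHOLD = 60

def risk_score(events: list[ActivityEvent], recent_stressor: bool = False) -> float:
    score = 0.0
    for e in events:
        score += WEIGHTS.get(e.kind, 0)
        if e.hour < 8 or e.hour >= 18:
            score += OFF_HOURS_BONUS
    return score * STRESSOR_BOOST if recent_stressor else score

# An employee copying files to a USB drive the night before a vacation can
# cross the threshold, which is the kind of false positive the docs warn about.
events = [ActivityEvent("bulk_download", 21), ActivityEvent("usb_mount", 22)]
print(risk_score(events, recent_stressor=True) >= ALERT_THRESHOLD)  # True
```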

Microsoft makes it easy to deploy such surveillance alongside its ubiquitous Office 365 productivity suite, which increases the likelihood of unnecessary surveillance, said Christl.

"Microsoft carries a specific responsibility here. The findings in my report show that Microsoft is not only making it easy to implement intrusive surveillance, but often even recommends the more intrusive options in its software documentation and in the user interface," Christl said. 

A spokesperson for Microsoft told Sherwood News in an emailed statement, "At Microsoft, we think using technology to track employees is both counterproductive and wrong. Microsoft has consistently emphasized digital empathy — the philosophy that organizations' cyber risk leaders need to have open and transparent conversations with employees about the security and compliance policies an enterprise has in place to satisfy applicable laws, industry requirements and leaders' varying risk tolerances.”

People are worried about workplace surveillance 

As AI monitoring tools become increasingly incorporated into workplace infrastructure, the public has concerns about this surveillance going too far. 

A 2023 Pew Research survey found that while many people were in favor of employers using AI tools in some scenarios, such as monitoring drivers making company trips or interacting with retail customers, a majority of Americans opposed the tracking of workers' movements, monitoring when employees were at their desks, and close monitoring of work computer activity. This opposition was especially prevalent among younger workers. 

The survey also looked at how AI-powered worker monitoring could be misused. A majority of poll respondents said monitoring would lead to workers feeling like they were being inappropriately watched and that information collected about them could be abused.

However, respondents did see a potential upside for AI in one aspect of the workplace. Among the 74% who said that bias and unfair treatment based on race or ethnicity is a problem, almost half (46%) thought the use of AI in performance evaluations would make things better.

Stopping the spying bosses

While Congress has yet to pass any significant legislation regulating AI, a bill placing limits on workplace surveillance is currently making its way through the House. 

Sponsored by Rep. Christopher Deluzio and co-sponsored by Rep. Suzanne Bonamici, the "Stop Spying Bosses Act" would force employers to disclose to employees exactly how, when, and where they are being surveilled. Employees would also be told what kinds of data are being collected and what third parties might get access to them.

Originally introduced in the Senate, the current version of the bill also puts limits on the data that companies can collect from their employees, such as prohibiting data collection that interferes with worker organizing efforts or that reveals anything about an employee's health status, political views, or religion. Companies would also not be allowed to monitor workers' activities when they are off duty or in sensitive locations such as bathrooms or lactation rooms. The bill also calls for the creation of a new Privacy and Technology Division inside the Department of Labor.

“It’s time to protect employees from the use of invasive surveillance technologies that allow bosses to track their workers minute by minute and move by move,” said Rep. Deluzio in a statement to Sherwood News. “Workers deserve far better than a workday full of endless suspicion and surveillance: they should have a workplace with respect and dignity.”

More Tech


EPA: xAI’s Colossus data center illegally used gas turbines without permits

The Environmental Protection Agency has ruled that xAI violated the law when it used dozens of portable gas generators for its Colossus 1 data center without air quality permits.

When xAI set out to build Colossus 1 in Memphis, Tennessee, CEO Elon Musk wanted to move with unprecedented speed, avoiding all of the red tape that could slow such a big project down.

To power the 1-gigawatt data center, Musk took advantage of a local loophole that allowed portable gas generators to be used without any permits, as long as they did not spend more than 364 days in the same spot. That allowed xAI to bring in dozens of truck-sized gas generators to quickly supply the massive amount of power the data center needed to train xAI’s Grok model.

The EPA's ruling says the use of such portable generators falls under federal regulation, and that the company did need air quality permits to operate the turbines. xAI is also using dozens of such generators to power its Colossus 2 data center just over the border in Alabama.



Trump to push Big Tech to fund new power plants as AI drives up electricity costs

President Donald Trump is expected to announce a plan Friday morning that would require Big Tech companies to bid on 15-year contracts for new electricity generation capacity. The move would effectively force companies to help fund new power plants in the PJM region as soaring demand from AI data centers pushes up electricity costs across the US power grid.

Earlier this week, Trump called on tech giants to “pay their own way,” arguing that households and small businesses should not bear the cost of power infrastructure needed to support energy-hungry data centers.

Microsoft quickly responded, saying it would “pay utility rates that are high enough to cover our electricity costs,” along with committing to other changes aimed at easing pressure on the grid. Other major tech companies are expected to follow suit, though Wedbush Securities analyst Dan Ives warned the added costs could slow the pace of data center build-outs.

As we’ve noted, forcing tech companies to shoulder higher electricity costs is likely to hit some firms harder than others. Companies like Microsoft, Google, and Amazon can pass at least some of those costs on to customers by selling data center capacity downstream. Meta, in contrast, does not have a cloud business, meaning its AI ambitions lack a direct revenue stream to offset rising power costs.

So far, tech stocks don't appear to be much affected in premarket trading. However, utility companies most levered to the AI boom certainly are, with Vistra, Constellation Energy, and Talen Energy deep in the red ahead of the open as analysts at Jefferies warn that these firms face risks from the plan.



OpenAI working to build a US supply chain for its hardware plans, including robots

When OpenAI purchased Jony Ive’s I/O, it entered the hardware business. The company is currently ramping up to produce a mysterious AI-powered gadget.

But OpenAI plans to make more than just consumer gadgets; it also intends to build data center hardware, and even robots.

Bloomberg reports that OpenAI has been on the hunt for US-based suppliers for silicon and motors for robotics, as well as cooling systems for data centers.

AI companies are looking toward robots as a logical next step for finding applications for their models.

OpenAI told Bloomberg that US companies building the AI brains of robots might have an edge against the Chinese hardware manufacturers that are currently making some impressive humanoid robots.

