Meta Platforms Inc., the social media giant that built its business on understanding user behavior, is now turning its data-hungry gaze inward—with predictably explosive results. The company recently informed tens of thousands of U.S. employees that their corporate laptops would begin tracking keystrokes, mouse movements, clicks, and screen activity. The stated purpose: to feed that behavioral data into Meta’s AI models so they can learn how people actually use computers. The reaction was swift and visceral. Within hours, internal comment threads flooded with anger, confusion, and hundreds of emoji reactions that left little doubt about how the workforce felt.
When an engineering manager asked how to opt out, Meta’s chief technology officer, Andrew Bosworth, offered a blunt answer: there is no opt-out, at least not on a company-issued laptop. This is the same company that is also tying AI tool usage to performance reviews, running mandatory “AI Transformation Weeks” to retrain its workforce, and building internal dashboards that gamify how many AI tokens employees consume in a day—a metric so aggressively tracked that some workers began creating AI agents to manage their other AI agents. The entire ecosystem started to resemble a feedback loop consuming itself.
The layoffs made everything worse
None of this is happening in isolation. On April 17, news broke that Meta planned to cut roughly 10% of its workforce (approximately 8,000 people), with the first wave scheduled for May 20. Employees who had spent weeks being told to embrace AI and train with AI, and whose computer behavior was now being harvested to train AI, suddenly faced the grim possibility that they were building their own replacements. The timing was, to put it mildly, devastating. Internal posts described the mood as “incredibly demoralizing.” At least three countdown websites appeared, tracking the days until the layoff date. Employees circulated nihilistic memes. One widely shared internal post simply read: “It does not matter.”
Mark Zuckerberg addressed the data collection at a company-wide meeting, framing it not as surveillance but as a way to teach AI how “smart people use computers to accomplish tasks.” He also noted that AI is “probably one of the most competitive fields in history”—a line that landed differently for employees sitting in their offices, wondering if they would still have a job in three weeks. The disconnect between executive messaging and ground-level reality could not have been more stark.
A broader pattern across tech
What is unfolding at Meta is not unique; it is simply further along than at most companies. Microsoft, Coinbase, and Block have all made similar moves recently, restructuring around AI in ways that led to layoffs and internal friction. The difference is that Meta is doing all of it simultaneously and at scale: retraining workers, surveilling their behavior, tying job security to AI adoption metrics, and cutting headcount to fund the entire endeavor. This aggressive, all-in approach has created a perfect storm of employee anxiety and resentment.
At Microsoft, the integration of AI into Office products has led to debates about productivity tracking and job displacement. Coinbase’s restructuring prioritized AI-driven automation over certain human roles. Block, led by Jack Dorsey, has similarly shifted resources toward AI initiatives. Yet none of these companies have combined mandatory behavioral monitoring with performance evaluations tied to AI usage and mass layoffs—all in the same quarter. Meta appears to be operating without a safety net, treating its workforce as both guinea pigs and potential liabilities.
The internal dashboards that track AI token consumption have become a particular point of contention. Some employees report spending hours each day trying to hit arbitrary token quotas, while others have built automated scripts to generate tokens without actually using AI meaningfully. The result is a culture of performative AI engagement, where the metric becomes more important than the output. This is precisely the kind of perverse incentive that organizational psychologists have warned about for decades.
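This is Goodhart’s law in action: once a measure becomes a target, it ceases to be a good measure. A minimal sketch below shows how a raw token-count metric rewards noise over output; everything in it (the `UsageEvent` record, the scoring rule, the figures) is hypothetical, invented for illustration rather than drawn from Meta’s actual dashboards.

```python
# Hypothetical illustration only: names, numbers, and the scoring rule are
# invented and do not describe Meta's actual internal dashboards.

from dataclasses import dataclass


@dataclass
class UsageEvent:
    tokens: int   # tokens consumed by one AI request
    useful: bool  # whether the request contributed to real work


def dashboard_score(events: list[UsageEvent]) -> int:
    """The naive metric: total tokens consumed, regardless of value."""
    return sum(e.tokens for e in events)


def useful_tokens(events: list[UsageEvent]) -> int:
    """What the metric is meant to proxy: tokens spent on real tasks."""
    return sum(e.tokens for e in events if e.useful)


# An engineer doing genuine work: a handful of substantive requests.
real_work = [UsageEvent(tokens=1_200, useful=True) for _ in range(5)]

# A quota-chasing script: hundreds of throwaway prompts fired in a loop.
quota_filler = [UsageEvent(tokens=300, useful=False) for _ in range(200)]

print(dashboard_score(real_work))     # 6000 points for real output
print(dashboard_score(quota_filler))  # 60000 points for noise
print(useful_tokens(quota_filler))    # 0 tokens of actual value
```

Any metric that counts consumption rather than outcomes is gameable in exactly this way, which makes the automated token-generating scripts described above a predictable response rather than an aberration.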
The irony of surveillance
There is a deep irony buried in Meta’s approach. The same company that spent years convincing billions of people to share their personal data willingly—through Facebook, Instagram, WhatsApp, and Oculus—is now discovering that its own employees are far more resistant to being monitored. Perhaps it is because employees understand the nuances of data collection better than the average user. They know that behavioral data can reveal not just how they work, but when they are distracted, tired, or unproductive. They know that such data can be weaponized in performance reviews, layoff decisions, and promotional considerations.
Moreover, the lack of an opt-out has raised significant legal and ethical questions. While employers have broad rights to monitor activity on company devices, using that data to train AI models that could eventually automate away the very jobs being monitored creates a conflict of interest that labor advocates are beginning to scrutinize. Several employees have reportedly contacted legal counsel, and internal discussions about collective action have surfaced on encrypted messaging platforms. Meta may be facing not just a morale crisis, but a potential labor dispute that could set precedents for the entire tech industry.
The gamification of AI consumption has also led to unintended consequences. Some teams have begun pooling their tokens to game the system, while others have discovered ways to generate fake usage without actually engaging with AI tools. This has created a cat-and-mouse dynamic between management and employees, further eroding trust. When a company’s own staff feels compelled to deceive internal metrics, it signals a fundamental breakdown in organizational alignment.
Historical context and future implications
To understand Meta’s current predicament, it is helpful to look back at previous corporate technology rollouts. In the 1990s, companies introducing enterprise resource planning systems often faced similar resistance: employees feared that the new systems would make their skills obsolete. The difference then was that the technology targeted process optimization, not personal behavioral surveillance. Today’s AI tools are fundamentally different because they are designed to learn from, and potentially automate, cognitive tasks that employees believe are uniquely human.
Meta’s AI Transformation Weeks are mandatory for all U.S. employees, covering topics like prompt engineering, AI ethics, and coding with large language models. While the intent is to upskill the workforce, many employees view these sessions as thinly veiled training for their eventual replacement. The company has also introduced internal certification programs for AI proficiency, which are now tied to eligibility for promotions and bonuses. This creates a coercive environment where employees must embrace AI not because they believe in it, but because their careers depend on it.
The broader tech industry is watching Meta’s experiment closely. If the company succeeds in integrating AI into its internal operations without a catastrophic loss of talent, other firms will follow suit. If it fails—if employee revolt leads to mass resignations or a collapse in morale—it could serve as a cautionary tale. Early signs are not promising. Blind, the anonymous workplace app, has seen a surge of negative reviews from Meta employees, with many citing the AI tracking and layoffs as reasons for seeking new jobs. Anonymous polls suggest that nearly a third of affected workers are actively looking for roles outside the company.
Meanwhile, the countdown websites continue to tick. The first wave of layoffs is set for May 20, and the anxiety is palpable. Employees who have spent years building Meta’s products are now unsure whether their contributions will be recognized—or whether they have been deemed redundant by the very AI systems they are being forced to train. The emotional toll is significant, with reports of increased stress, decreased productivity, and a general sense of betrayal.
As Meta pushes deeper into its AI-driven future, the company must confront an uncomfortable truth: no amount of technology can replace the trust and goodwill of a workforce that feels exploited. The keystroke tracking incident may be just the beginning of a larger reckoning over how tech companies treat their employees in the age of artificial intelligence.
Source: Digital Trends News