Gabriel Grill and Christian Sandvig
It’s probably hard to believe that you could be a target of espionage, but spying on workers is the next frontier of military AI. Surveillance techniques familiar from authoritarian dictatorships have now been repurposed to target American workers.
Over the past decade, a few dozen companies have emerged to sell employer subscriptions to services such as “open-source intelligence,” “reputation management,” and “insider threat assessment,” tools that were originally developed by defense contractors for intelligence purposes. As deep learning and new data sources have become available in recent years, these tools have grown significantly more sophisticated. With them, your boss can use advanced data analytics to identify labor organizing, internal leaks, and the company’s critics.
Gabriel Grill is a researcher at the Center for Ethics, Society, and Computing at the University of Michigan.
Christian Sandvig is director of the Center for Ethics, Society, and Computing and the McLuhan Collegiate Professor of Information at the University of Michigan.
It’s no secret that unionization efforts are already being monitored by large companies like Amazon. But the expansion and normalization of worker-surveillance tools has received little comment, despite their ominous origins. If the vendors are as capable as they claim, or even just heading in that direction, we need a public conversation about the wisdom of transferring these informational munitions into private hands. Military-grade AI was intended to target our national enemies, nominally under the control of elected democratic governments, with safeguards in place to prevent its use against citizens. We should all be concerned that the same systems can now be widely deployed by anyone able to pay.
FiveCast, for example, began as a counterterrorism startup selling to the military, but it has turned its tools over to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including your social media posts. Rather than just counting keywords, FiveCast brags that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts within multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, such as strikes, for corporations.
Network analysis tools developed to identify terrorist cells can now be used to identify key labor organizers so that employers can illegally fire them before a union is formed. And quantitative risk assessment methods designed to warn the nation of impending attacks can now inform investment decisions, such as divesting from regions and suppliers estimated to have a strong capacity for labor organizing.
It is not clear that these tools can live up to their hype. For example, network analysis methods assign risk by association, which means you could be flagged simply for following a particular page or account. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, such as deep learning, to identify content that appears angry, which is assumed to signal complaints that could lead to unionization. But sentiment detection has been shown to be biased and built on faulty assumptions.
But the capabilities of these systems are growing rapidly. Companies are advertising that they will soon include next-generation AI technologies in their surveillance tools, and the apparent goal is a routinized, semi-automatic, union-busting surveillance system.
What’s more, these subscription services work even when they don’t work. It doesn’t matter whether a worker tarred as a troublemaker is actually disgruntled; management and corporate security may still act on the accusation and unfairly retaliate. Vague, aggregate judgments about a workforce’s “emotions” or a company’s public image are presently impossible to verify for accuracy. And the mere presence of these systems likely has a chilling effect on legally protected behavior, including labor organizing.
The companies that provide these services thrive in obscurity and regulatory neglect. Defenses against workplace surveillance are made of the thinnest gauze. Industry apologists claim their software, sold to help employers “understand the union environment,” is not anti-union. Instead, they brand themselves as sellers of “corporate awareness monitoring” and prominently note that “every American is protected by federal, state, and local laws” at work. It is apparently not the vendors’ fault if a buyer uses their software to infringe on a legally protected right to organize or protest.
Surveillance companies also deflect criticism by claiming that they use only publicly available information, such as social media data and news articles. Even where this is true, their argument ignores the consequences of the everyday use of military-grade surveillance against the citizens of a free society. The same tools that track the movements of Russian tanks in Ukraine should not be handed over to your supervisor to track you. Intelligence software vendors seem to hope that letting bosses intensively scrutinize their workers’ lives will become so normal that employers do it continuously, as a proactive measure. As capabilities have improved and the costs of data collection and analysis have fallen, we now face a future in which any middle manager can mobilize the resources of their own CIA.
This kind of surveillance industry is fundamentally incompatible with democracy. Companies that deploy such tools must be required to disclose this use publicly so that existing laws can be enforced. And new regulations are urgently needed. Last year, the National Labor Relations Board announced it would seek to ban “intrusive” and “abusive” workplace surveillance, an important step. In addition, workers and unions should be able to testify at legislative hearings on the future regulation of AI and workplace surveillance. We need specific rules stating which uses of AI, data sources, and methods are permissible, and under what conditions they may be used.
These technologies are already being sold and deployed abroad and used for cross-border surveillance. At best, an energized regulator would become a responsible global leader on AI and seek to set international standards for workplace technologies. Without that work, multinational corporations with complex supply chains will easily circumvent or undercut country-specific protections.
In the end, our society may conclude that regulation is not enough: it is the existence of this marketplace that should be illegal. No employer, at any price, should be able to buy a detailed picture of its workers’ associations, emotions, and thoughts. Your life outside work should not be your employer’s business by default.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.