Tech giants have in recent months been helping state legislators craft industry-friendly bills to regulate artificial intelligence, borrowing from the playbook Amazon used in writing data privacy legislation that went on to become a "model" in several of the states that have so far passed such laws.

One company, the global human resources and analytics software powerhouse Workday, has stood out for its aggressive, and so far successful, effort to promote its own model for AI legislation in state after state, according to civil liberties advocates and text in several state bills that matches parts of a Workday legislative proposal obtained by Recorded Future News.

The company is a top developer of HR and other workforce tools that harness AI, and it has been pushing a multi-state model for how to regulate the technology, including in the workplace. Microsoft, a major investor in OpenAI, and other tech giants have turned up in some states offering legislative guidance, but Workday has been particularly active and influential, observers say.

Civil liberties and workers' rights advocates say they are alarmed by Workday's outsized state-level success, particularly since AI tools deployed in the workplace can have a huge impact on real people, determining who gets hired, fired and promoted. The company's model legislation does nothing to protect consumers or employees, the advocates say, while giving employers and other private entities nearly unchecked power to set workplace and other AI norms with no independent audits.

A top Workday lobbyist, Chandler Morse, the company's vice president for corporate affairs, has been interacting with state lawmakers for several months and, by his own account, given in testimony at an October 25, 2023, hearing on AI before a Maryland General Assembly committee, had at that point held "active" discussions in at least five state capitals. Meanwhile, a second Workday executive presented the company's vision to a large "multi-state working group" of legislators convened by Connecticut state Sen. James Maroney on March 1, a Workday spokesperson confirmed.

A draft copy of Workday's model bill, which is marked confidential but has apparently been shared widely among state lawmakers and their aides, contains language closely resembling central elements of AI bills introduced in California on February 15; Illinois on February 8; Rhode Island on February 7; Connecticut on February 7; New York on January 8; and Washington state on January 8. A Workday spokesperson said the company did not lobby on AI governance issues in either Rhode Island or Illinois.
The Connecticut bill is still being refined, Maroney said, but, like the others, it currently includes key language that matches the Workday model nearly verbatim. Maroney told Recorded Future News that he considered Workday's ideas after executives there sent him their model bill.

Maroney also was the lead sponsor of a dominant model of industry-friendly data privacy legislation that passed in Connecticut and was later copied in many other states, thanks in part to his stewardship of the same "multi-state working group" focused on harmonizing approaches between states, an effort he is now replicating for AI legislation.

Washington State Rep. Clyde Shavers said he collaborated with Workday and a variety of other tech companies on his bill, and he also spoke to a California legislator who just introduced a bill shaped with Workday's support. "It's important to collaborate among legislatures so that we don't have opposing or discrete legal regimes that may be difficult or challenging for entities dealing with artificial intelligence to really fully comply with," Shavers said in an interview.

None of the other relevant lawmakers responded to requests for comment. Despite Morse's outreach in Maryland, no bill has been introduced there.

Workers' rights and civil liberties advocates say "automated decision tools," or software that uses artificial intelligence to make decisions, can be dangerous if not aggressively regulated. Workday is at the center of a court case that underscores why advocates are concerned. The company is being sued by Derek Mobley, a Black man over age 40 who claims he has been rejected from more than 100 jobs he applied for using Workday's AI-based hiring software. Mobley alleges that Workday's automated system is "much more likely to deny applicants who are African-American, suffer from disabilities and/or are over the age of 40," according to an amended complaint for the class action lawsuit against the company filed in a California federal court last month. (A Workday spokesperson said the lawsuit "is without merit.")

Advocates for workers say cases like Mobley's highlight why AI regulation needs to be carefully written, and they point to several examples of language in the state bills that parallels the Workday model and, they say, will leave workers unprotected.

Particularly worrisome, they say, is a line borrowed from Workday's template that appears, with very slight wording differences, in at least six states' legislation, providing that the bills' regulations would apply only to an automated decision tool that has been "specifically developed and marketed to, or specifically modified to be the controlling factor in making a consequential decision."

The problem, critics say, is that without independent audits, no company is going to admit to using automated tools as a controlling, or dominant, factor in employment decisions. And if a company isn't using the technology as a controlling factor when making decisions, then, under Workday's language, the regulations don't apply. "It creates a loophole that swallows the entire law," said Matt Scherer, senior policy counsel for workers' rights and technology at the Center for Democracy and Technology, an advocacy group focused on digital rights.
The language, paired with the lack of audits, "makes it much harder for someone to challenge when a deployer is saying that the AI system is exempt from the law," said Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation, an advocacy group focused on defending civil liberties in a digital world. In fact, she said, under the spate of new bills inspired by Workday's widely circulated model, as long as an employer states that the regulations don't apply because it isn't using workplace AI as a "controlling factor," the employer doesn't even need to tell workers an AI tool has been deployed at all.

The Workday model, and the state bills it has inspired, also don't give workers the right to sue if they feel AI has harmed them, said Cody Venzke, senior policy counsel at the ACLU. "Most people expect that if they're discriminated against in employment that they can sue," Venzke said.

A Workday spokesperson said the company's interest in these issues isn't new. It has been "helping lay the groundwork for AI regulation since 2019," the spokesperson said in a prepared statement, adding that Workday is committed to both "responsible AI and enabling innovation."

"We believe it is important to differentiate between those decisions in which the AI is a controlling factor from those in which an informed human is in the loop and leveraging AI-driven insights, remaining in control, and is accountable for a final decision," the statement said.

Asked how the public can hold corporations accountable without independent audits, the Workday spokesperson said impact assessments, which appear in place of audits in the Workday model legislation as well as in the state bills referenced above, are a "tried and tested tool." The spokesperson also noted that the California bill just introduced with the company's support would direct the state attorney general and Civil Rights Department to oversee compliance.

Critics say state attorneys general and other public offices lack the bandwidth and, often, the expertise to effectively police the use of AI in every workplace in a given state. Impact assessments will not be effective without independent review, they say.

"The distinction is that an audit is conducted by a theoretically impartial third party, whereas under these bills in California, Connecticut and other states, impact assessments can be done by the company that is deploying or developing the AI system — with all of the potential conflicts of interest that such self-assessment creates," Scherer said.

Many of the bills under consideration also contain loopholes that will allow companies to avoid giving consumers and workers notice about how the technology will be used, Scherer said, calling it "the overarching No. 1 problem with all of these bills."

Workday, which earned $6.2 billion in revenue in fiscal 2023 and whose software is used by more than half of Fortune 500 companies, has been urging state lawmakers to focus on "harmonizing" state bills, Morse told Maryland lawmakers in October. Asked about Morse's drumbeat of state lobbying in recent months, the Workday spokesperson said in a statement that the company shares "our expertise with policymakers to support in the development of responsible AI guardrails." Workday lobbyists have even earned hat tips from state legislators unveiling bills.
When California Assembly Member Rebecca Bauer-Kahan introduced her automated decision tools bill last month, she quoted Morse in a press release announcing the legislation. In the release, Morse hailed Bauer-Kahan's bill and said Workday was "pleased to have contributed to its development." A Microsoft executive was also quoted offering support for the bill in the release.

Morse has urged other lawmakers to follow Bauer-Kahan's lead and act quickly. Settling on a common model "as different jurisdictions start to think about how to move forward is going to be key so that this regulatory strategy doesn't collapse under its own weight," Morse told Maryland lawmakers in October.

At an AI hearing in Washington state in December, a Workday lobbyist, Jarrell Cook, joined executives from Microsoft and Salesforce in testifying about how to approach AI regulation. Cook told Washington lawmakers the company has spoken with state and federal lawmakers and has also advocated for its vision of AI regulation on the international stage.

"Workday has prioritized working with policymakers that are seeking to regulate AI and align with the vision that we have, making sure that it's deployed and developed responsibly," Cook said. "We know that most lawmakers are not running for office thinking about AI as often as we do."
Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career, Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.