
Mass. lawmakers scramble to regulate AI amid rising concerns


Congress held hearings this week about how to regulate artificial intelligence systems, but it’s playing catch-up.

Amid concerns about the technology’s risks, Massachusetts lawmakers have already drafted bills aimed at ensuring that AI systems include strong privacy protections and that they’re purged of biases that could lead to racial, religious, or gender discrimination.

And Massachusetts has plenty of company. It’s one of nearly two dozen states that aren’t just waiting for the federal government to act. Instead, they’re moving quickly to snuff out possible AI abuses before the technology becomes embedded into nearly every facet of daily life.

“We made the mistake to not get ahead of the curve with Facebook and other social media,” said Massachusetts state Senator Barry Finegold, who added that lawmakers missed their chance to limit social media usage by children and teens. “It is way too dangerous not to get ahead of the curve on artificial intelligence.”

Finegold has filed a bill that would set performance standards for powerful “generative” AI systems like ChatGPT, which can create original images, music, and text in response to prompts created by humans. Companies that make such systems, including Google, Microsoft, and OpenAI, would have to register with the Massachusetts attorney general and provide information on how their systems operate and how they collect and store data.

Companies would have to get someone’s permission to use his or her personal information to train the system. They’d be required to install strong security to protect the data and to delete sensitive data on request. AI systems capable of generating original texts would have to include an indelible digital “watermark” proving that the copy was written by a computer. And companies would need to make sure that AI systems aren’t used to discriminate against individuals or groups based on race, sex, gender, or other characteristics protected under antidiscrimination law. The attorney general would be authorized to file lawsuits against makers of AI systems that violate the statute.

Finegold said that he and his staff used ChatGPT to generate the bill and that the resulting text needed only minor alterations.

“It did a pretty good job,” he said. “That’s what’s scary about this stuff.”

Another bill, filed in both the House and Senate, would set up a commission to monitor the state government’s own use of AI systems. State Senator Michael Moore, who chairs the Joint Committee on Advanced Information Technology, the Internet and Cybersecurity, said state officials lack a comprehensive inventory of AI systems used by state agencies.

“We need to know who is using what technology,” said Moore. “We should be able to find out from them, are any of your agencies using AI? They should be able to tell us.”

A third bill, filed by state Representative Josh Cutler, would address the growing use of AI chatbots to provide mental health services. Under the bill, doctors and counselors could use only AI tools that have been approved by a professional licensing board. They’d have to get a patient’s permission to use them, and the treatment would have to be monitored by a human for safety and effectiveness.

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT in March in Boston. Michael Dwyer/Associated Press

Like Finegold, Cutler used ChatGPT to draw up the bill.

Cutler said he knew of no problems caused by AI counseling systems so far. “This is more heading off a problem,” he said. “Creating some guardrails.”

In addition, state lawmakers are taking up data privacy bills that would set tougher limits on how online companies can make use of customer data. While not specifically focused on AI services, these bills would restrict the ability of such services to collect information from users.

Suresh Venkatasubramanian, a professor of computer science at Brown University, said that still more regulation will be needed. For example, he called for legislation requiring that AI systems actually perform as promised.

Venkatasubramanian cited the AI-based self-driving features in Tesla automobiles, which promise full autonomy but can’t safely operate the vehicle under all driving conditions. He thinks it should be illegal to make such claims about AI systems without proof. “They should be safe,” Venkatasubramanian said. “They shouldn’t harm us. And they should be effective. They should do what they claim to do.”

Venkatasubramanian also called for a law requiring that any AI system must always be backed up by humans who can correct errors or detect biased outcomes. And he said that AI systems should be required to explain themselves. Today’s systems usually don’t reveal how they reach their conclusions. To Venkatasubramanian, that’s unacceptable.

“If you don’t know how your system works, why are you putting it in a place where it affects me?” he said. “How do you know it works well?”

Even as Massachusetts lawmakers take up their proposals, they’re hoping to hand off the responsibility to Congress. Moore, Finegold, and Venkatasubramanian all said that AI regulation should be embodied in federal law, rather than a patchwork of statutes enacted by the 50 states.

They may get their wish. On Thursday, two days after Sam Altman, chief executive of OpenAI, called for the establishment of a federal regulatory agency to oversee AI systems, Colorado Democratic Senator Michael Bennet filed a bill to create such an agency.

In addition, Oregon Democratic Senator Ron Wyden plans to introduce a bill that would require companies to conduct regular assessments of the AI systems they use or sell. Companies would also have to notify people about the use of AI in the products and services they sell. And the Federal Trade Commission would be given authority to enforce the law.

But Moore said that in a sharply divided Congress, federal action might take too long. “We’ve all seen how stagnant some things have been in Washington,” he said. “I don’t think we should wait.”


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.