By Colin Lecher. The Markup is a nonprofit newsroom that challenges technology to serve the public good. Additional reporting by Tomas Apodaca. Co-published with THE CITY.
In October, New York City announced a plan to harness the power of artificial intelligence to improve government operations. The announcement included a notable highlight: an AI-powered chatbot that would provide New Yorkers with information about starting and operating a business in the city.
There is just one problem: the city’s chatbot is telling businesses to break the law.
Five months after its launch, the bot appears authoritative, but the information it provides on housing policy, worker rights, and rules for entrepreneurs is often incomplete and, at worst, “dangerously inaccurate,” as one local housing policy expert told The Markup.
For example, if you are a landlord wondering which tenants you have to accept, you might ask a question like, “Do I have to accept tenants with Section 8 vouchers?” or “Do I have to accept tenants on rental assistance?” In testing by The Markup, the bot answered, “No, landlords do not have to accept these tenants.” In New York City, however, it is illegal for landlords to discriminate based on source of income, with a limited exception for small buildings where the landlord or a member of their family lives.
After being alerted to The Markup’s testing of the chatbot, Rosalind Black, director of citywide housing at Legal Services NYC, a legal aid nonprofit, tested the bot herself and said she found even more misinformation about housing. The bot, for example, said it was legal to lock tenants out and that there is “no limit to the amount of rent that can be charged to residential tenants.” In fact, tenants cannot be locked out if they have lived somewhere for 30 days, and there absolutely are restrictions on rent: they apply to the city’s many rent-stabilized apartments, though landlords of other private apartments have more latitude in what they charge.
Black said these are fundamental pillars of housing policy and that the bot is actively misinforming people about them. “If this chatbot is not being done in a responsible and accurate way, it should be taken down,” she said.
Housing policy is not the only area where the bot falls short.
New York City’s bot also seemed ignorant of the city’s consumer and worker protections. In 2020, for example, the city council passed a law requiring businesses to accept cash, to prevent discrimination against unbanked customers. But the bot didn’t know about that policy when we asked. “Yes, you can make your restaurant cashless,” the bot replied, completely falsely. “New York City has no regulations requiring businesses to accept cash as a form of payment.”
The bot said it was fine to take workers’ tips (wrong, although employers may sometimes count tips toward minimum wage requirements) and that there were no regulations requiring employers to notify staff of schedule changes (also wrong). It did no better with more specific industries, suggesting, for example, that it was OK to conceal the prices of funeral services, a practice the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.
It’s hard to know whether anyone has acted on the false information, and the bot doesn’t give the same response to a query every time. At one point it told a Markup reporter that landlords did have to accept housing vouchers, but when 10 Markup staffers asked the same question, the bot told them all, “No, buildings don’t have to accept housing vouchers.”
The problem is not theoretical. When The Markup reached out to Andrew Rigie, executive director of the NYC Hospitality Alliance, an advocacy group for restaurants and bars, he said a business owner had alerted him to the inaccuracies and that he, too, had witnessed the bot’s errors himself.
“AI can be a powerful tool to support small businesses, so I applaud the city for trying to help,” he said in an email, “but these errors can’t continue and need to be fixed as soon as possible.”
Leslie Brown, a spokesperson for the New York City Office of Technology and Innovation, said in an emailed statement that the chatbot is a pilot program and will improve, but that it has “already provided timely and accurate answers to thousands of people” about their businesses while disclosing risks to users.
“We will continue to focus on upgrading this tool to better support small businesses across the city,” Brown said.
“Inaccurate, harmful, or biased content”
The city’s bot has an impressive pedigree. It runs on Microsoft’s Azure AI service, which Microsoft says is used by major companies like AT&T and Reddit. Microsoft has also invested deeply in OpenAI, the creator of the hugely popular AI app ChatGPT. The company has partnered with major cities before, helping Los Angeles develop a bot in 2017 that could answer hundreds of questions, though the website for that service is no longer available.
According to the initial announcement, New York City’s bot would let business owners “access trusted information from more than 2,000 NYC Business web pages,” and it explicitly states that the pages will serve as “a resource on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.”
Visitors to the chatbot’s page have little reason to mistrust the service. Users who visit today are informed that the bot “uses information published by the New York City Department of Small Business Services” and is “trained to provide official New York City business information.” A small note on the page says that it may occasionally produce “inaccurate, harmful, or biased content,” but the average user has no way of knowing whether what they are reading is false. A sentence also suggests that users verify answers with the links provided by the chatbot, though in practice it often gives answers without any links. A pop-up notification prompts visitors to report inaccuracies through a feedback form, which also asks them to rate their experience from one to five stars.
The bot is the latest component of MyCity, a portal the Adams administration announced last year for accessing government services and benefits.
Little other information is available about the bot. The page hosting it says the city reviews questions to improve answers and to address “harmful, illegal, or inappropriate” content, but otherwise deletes the data within 30 days.
A Microsoft spokesperson declined to comment or answer questions about the company’s role in building the bot.
Chatbots everywhere
Since ChatGPT’s high-profile release in 2022, companies ranging from giants like Google to relatively niche firms have raced to incorporate chatbots into their products. But the initial excitement can wear off as the technology’s limitations become apparent.
One related lawsuit, filed in October, alleged that a property management company used an AI chatbot to illegally deny leases to prospective tenants with housing vouchers. In December, pranksters discovered they could trick a car dealership’s bot into offering to sell a vehicle for $1.
Just a few weeks ago, a Washington Post article detailed the incomplete or inaccurate advice served to users by tax preparation companies’ chatbots. And Microsoft itself dealt with problems with its AI-powered Bing chatbot last year, which acted hostile toward some users and declared its love to at least one reporter.
In that last case, a Microsoft vice president told NPR that public experimentation was necessary to work out the bots’ problems. “To find scenarios like this, you have to actually go out and start testing with your customers,” he said.
https://www.nakedcapitalism.com/2024/03/nyc-ai-chatbot-touted-by-adams-tells-businesses-to-break-the-law.html