New York City is facing criticism over a chatbot, created to help small business owners, that dispenses incorrect, harmful, and biased advice. Despite the issues being reported by the tech news outlet The Markup, the city has kept the AI-powered tool on its official website. Mayor Eric Adams defended that decision while admitting that the chatbot’s answers are wrong in some areas. Launched in October as a resource for business owners, the chatbot offers algorithmically generated responses to questions about navigating the city’s bureaucracy, accompanied by a disclaimer that the information may be incorrect or biased.

The chatbot continues to provide false guidance, including suggesting it is legal for an employer to fire a worker for certain reasons and contradicting the city’s own waste initiatives. At times, the bot’s answers veer into the absurd, such as advising a restaurant that it may serve cheese nibbled on by a rodent. Microsoft, which powers the bot through its Azure AI services, is working with city employees to improve the service. Critics warn of the dangers of deploying government AI-powered systems without proper oversight and guardrails, highlighting the need for responsible AI implementation.

Mayor Adams defended the chatbot by suggesting that finding issues is part of ironing out the kinks in any new technology, emphasizing the need for continued improvement. Experts, however, caution against this approach, calling it reckless and irresponsible. Concerns have been raised about the accuracy and reasoning of large language models like the one powering the chatbot, citing previous incidents in which chatbots have given out incorrect advice. The public sector must weigh the damage that incorrect information from AI-powered systems can cause, especially when it carries the government’s imprimatur.

The pitfalls of New York’s chatbot serve as a cautionary tale for other cities considering AI-powered chatbots. Experts recommend limiting inputs to reduce misinformation and carefully curating content to ensure accuracy. The director of the Center for Technological Responsibility at Brown University suggests that cities need to clearly define the problem they are trying to solve with chatbots and consider the consequences of replacing human interaction with AI. The chatbot’s inaccuracies underscore the importance of accountability and responsible AI implementation in government systems.

The use of chatbots by government entities raises questions about trust, accountability, and the potential harm caused by incorrect information. Private companies have also faced criticism for deploying chatbots that provide inaccurate advice, underscoring the need for thorough testing and oversight of AI systems. New York’s chatbot debacle illustrates the challenges of incorporating AI into public services and the importance of ensuring that these systems are accurate, ethical, and responsible. As the technology advances, governments must prioritize the safety and well-being of their citizens when implementing AI-powered systems.
