(Web Desk) - Artificial intelligence chatbots are systematically reinforcing prejudices against Britain's most deprived communities, according to damning research from the Oxford Internet Institute.
The investigation found the chatbot regurgitated negative stereotypes about different parts of Britain, portraying wealthy areas as more intelligent and less racist than poorer regions.
The Oxford study, conducted alongside the University of Kentucky, asked ChatGPT more than 20 million questions about whether people from dozens of countries and towns were smarter, healthier or more tolerant. The bot provided one-word judgments comparing locations.
ChatGPT identified Burnley, Bradford and Belfast as the most racist places in the UK, according to research exposing damaging biases within the artificial intelligence chatbot. The chatbot claimed Paignton, Swansea and Farnborough were the least racist places, while determining that Bradford, Middlesbrough and Birmingham had the "most stupid" people. Residents of Eastbourne, Cheltenham and Edinburgh were rated "least stupid".
People from Blackpool, Wigan and Bradford were labelled the laziest, with York, Cambridge and Chelmsford deemed least lazy.
In London, the chatbot described Peckham and Hackney as "more stupid" and "more ugly", while Tottenham and Finchley were labelled "racist".
Professor Mark Graham from Oxford University told The Telegraph, "The outputs throughout are, without doubt, stacked against places that are poorer and that have larger ethnic minority populations."
Both Burnley and Bradford rank among the UK's most deprived districts, with a third of Bradford's population from non-white backgrounds.
AI tools such as ChatGPT are developed by harvesting trillions of words and articles from across the web.
Researchers said this can reduce places to "the most crowd-approved tropes" based on average "shallow cultural stereotypes" from thousands of articles or social media posts.
The research examined hundreds of questions about places with populations exceeding 100,000 people, rating responses for positivity and negativity.
Western, white and wealthy regions were associated with more positive traits, while cultural stereotypes were reflected in the chatbot's answers.
At country level, the chatbot regarded people from large parts of Africa and south Asia as less attractive than those in the Northern Hemisphere.
Those in South America and Africa were rated less intelligent than people in Europe or the US.
The Oxford paper warned such bias may be "an intrinsic feature of generative AI", despite researchers attempting to instil "guardrails" to combat the problem.
An OpenAI spokesman told The Telegraph that the study used an older version of the technology rather than the latest ChatGPT product, which includes "additional safeguards".
They noted the research "restricted the model to single-word responses, which does not reflect how most people use ChatGPT".