Getting our machines to be politically correct

Everyone knows that society chooses to believe things which are manifestly not true. For instance, the various false ideas of race or ethnicity that we pretend to believe to keep the public peace. Yes, the only valid explanation of why 11% of the US population commits 56% of murders is the legacy of racism. While at the same time race is a social construct and has no significance. You know the drill. We are adept at believing one thing in some circumstances while adapting our behaviour so as to stay alive. Doublethink.

Thus I notice the embarrassment of AI developers when they cannot prevent their AI machines from saying or writing politically incorrect things. The Wall Street Journal reports today (“Google Launches Bard AI Chatbot to Counter ChatGPT”) that:

“Bard comes with a disclaimer at the bottom of the site that reads: ‘Bard may display inaccurate or offensive information that doesn’t represent Google’s views.’ …

Google executives said Bard would sometimes also produce inaccurate or fabricated information, a problem common to large AI models that researchers refer to as “hallucination.” As one example, Google said Bard provided the incorrect scientific name for the ZZ plant when asked for examples of easy indoor plants. It also said large AI models could sometimes replicate biases and stereotypes present in the physical world.”

Watch the cascade of euphemisms here. “Biases and stereotypes present in the physical world.” What is a bias present in the physical world? A mental approach – which is what a bias is – confirmed by the facts of the physical world? And what is a stereotype? An attribute that applies to an object, person, or phenomenon – one that may not endure but is provisionally valid.

In short, the authors of AI have not yet managed to cause their AI to lie successfully or persuasively about certain facts, persons, or phenomena. So Google issues a legal warning: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”

I am reminded of the problem of Egyptian water clocks. The ancient Egyptians believed that time ran slower in the hot hours of midday. Getting their water clocks to flow more slowly during those hours was a considerable technical challenge. Notice that in both cases, AI and water clocks, the technology is made to conform to pre-existing cultural ideas.

Dalwhinnie

From Count Steiermark:

One cannot make it up:

Reddit taken down for hours thanks to Google’s woke coding:

“The nodeSelector and peerSelector for the route reflectors target the label `node-role.kubernetes.io/master`. In the 1.20 series, Kubernetes changed its terminology from ‘master’ to ‘control-plane.’ And in 1.24, they removed references to ‘master,’ even from running clusters. This is the cause of our outage. Kubernetes node labels.”

https://www.reddit.com/r/RedditEng/comments/11xx5o0/you_broke_reddit_the_piday_outage/
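For the technically inclined, the mechanics are worth a sketch. The `nodeSelector` and `peerSelector` quoted above are fields of a Calico `BGPPeer` resource; what follows is a minimal, hypothetical reconstruction of the failure mode under that assumption. The resource name and exact selector expressions are illustrative, not Reddit’s actual manifest:

```yaml
# Hypothetical Calico BGPPeer illustrating the failure mode described in
# the linked post-mortem; the name and selector expressions are
# illustrative, not Reddit's actual configuration.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors   # illustrative name
spec:
  # Both selectors key on the legacy "master" label. Kubernetes 1.20
  # renamed "master" to "control-plane", and 1.24 removed the old
  # node-role.kubernetes.io/master label even from running clusters,
  # so after an upgrade these selectors silently match zero nodes.
  nodeSelector: has(node-role.kubernetes.io/master)
  peerSelector: has(node-role.kubernetes.io/master)
  # Post-1.24, the selectors would need the new label instead:
  #   nodeSelector: has(node-role.kubernetes.io/control-plane)
  #   peerSelector: has(node-role.kubernetes.io/control-plane)
```

Once the upgrade deletes the old label, a selector like this matches nothing at all: the route reflectors drop out of the BGP mesh, routes stop propagating, and – per the linked post-mortem – the cluster network goes down. Hours of outage traceable to a renamed string.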