So it's basically the automation of the Western, male gaze. The calcification of bias as "truth" is real. There's a danger that prejudice calculated through a model will be largely accepted as mathematical fact rather than a mirror of our societies, flaws and all.
Depends on which AI you're talking about. If you're talking about large language models like ChatGPT and Bard, they're being developed and used by English-speaking people so yes, they would be trained on English sources of text first. The other languages would come later.
Generative AI in general. Lots of social media posts of things like "countries as superheroes," etc. have been popping up on my feed. Obviously the prompt maker has a say in the final output, but I've been bothered lately by the obvious bias coloring the results. For instance, weirdly enough, "the most beautiful woman from each country" all seem to look pretty much the same...