It’s easier to build an artificial brain that interprets all of humanity’s words as accurate ones, composed in good faith, expressed with honorable intentions. It’s harder to build one that knows when to ignore us.
[---]
If large language models (LLMs) are in our future, then the most urgent questions become: How do we train them to be good citizens? How do we make them "benefit humanity as a whole" when humanity itself can't agree on basic facts, much less core ethics and civic values?
— Steven Johnson, "A.I. Is Mastering Language. Should We Trust What It Says?"
How can we expect an unbiased AI when it's trained using data generated by biased humans?