Henry Kissinger has noted some of the possibilities of artificial intelligence and raised some important questions:
Some “AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions ("What is the temperature outside?"), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI's learning about its questioners? If so, how do we accomplish these goals?”
Do we want children to learn values through discourse with untethered algorithms?
No. But there are also things to unpack about how this question is worded.
It seems to imply children being handed over to technology to learn independently of a teacher. That can only happen after a certain age, however engrossed a 3-year-old can become in a small screen. As long as education is formal and children still need to eat and move around physically, there will be teachers and school staff involved in their education. As for untethered algorithms: whose algorithms are they? Someone is writing them. Someone is paying for them. Algorithms don't just appear before students.
When it comes to algorithms and education, what exactly are we claiming the AI is learning? Is it learning about the educational content, the student, or both? If it's the content, who decides what we tell the algorithm is true (if we do at all)? If it's the student, do we set limits on when teaching strategies cross the line into manipulation, in situations that bear no resemblance to normal human interaction?
Should we protect privacy by restricting AI's learning about its questioners?
The bigger questions here are: How much information about a student should AI retain? And is it willing to change what it has learned based on new or corrected information? Those questions speak to two of the bigger risks of privacy violations: security and bias.
If so, how do we accomplish these goals?
This starts with being able to recognize when technology developers are telling the truth about how the technology works. There is a widespread belief (also expressed in Kissinger's third area of special concern) that AI can get so good that it cannot explain itself. It's the supposed “black box.” Sure, AI can digest more data than a human could, and a pattern may be identified from millions of tiny scattered pieces of data, but that doesn't mean it can't be taught to explain itself in understandable terms. Watson could read every medical journal, and when its advice surprised medical professionals, it could produce journal references to support its reasoning, because it had been taught to do so. Transparency is still possible.
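That kind of transparency can be designed in from the start. As a purely illustrative sketch (in Python, with an invented corpus and invented journal names, and not how Watson or any real product actually works), the general idea is a system that returns the references supporting its answer along with the answer itself:

# Hypothetical sketch: a question-answering routine that reports the sources
# behind its answer. Corpus contents and reference names are invented for
# illustration; this is not any real product's implementation.
from dataclasses import dataclass

@dataclass
class Source:
    reference: str  # e.g., a journal citation
    text: str       # the passage the system has "read"

CORPUS = [
    Source("Journal of Hypothetical Medicine 12(3)",
           "Drug A interacts poorly with Drug B in elderly patients."),
    Source("Hypothetical Oncology Review 7(1)",
           "Drug C showed better outcomes than Drug A for condition X."),
]

def answer_with_references(question, corpus):
    """Return the best-matching passage plus the reference that supports it."""
    q_words = set(question.lower().split())
    # Naive relevance score: word overlap between the question and each passage.
    best = max(corpus, key=lambda s: len(q_words & set(s.text.lower().split())))
    return best.text, [best.reference]

answer, refs = answer_with_references("Which drug is better for condition X?", CORPUS)
print("Answer:", answer)
print("Supported by:", ", ".join(refs))

However crude, the point stands: if the system is built to keep track of what supported its conclusion, it can be made to show its work.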
Humility will be necessary for all involved. Conflicts of interest are still a thing, even if the algorithm supposedly has no interest at all. Moral and natural law still apply.