The prevailing view of technology in our culture is this: if technology can do something, then it should. The means justify the ends, and woe be to those who get in the way.
Some people inaccurately assume that technology has no bias. For some, it is then no great leap to conclude that if technology is not bad, it must be good.
And then there are the biggest cheerleaders of technology who love to loudly proclaim that if we let technology get big and powerful enough, it will solve all our problems. Utopia! (Social media was going to bring us all together and usher in world peace, remember?) Some are so blind to their own tyranny that they want to criminalize anyone who questions or interferes with this supposed march of progress. Therein, of course, lies the first clue that not all is as good as they would have it seem.
The rest of the world exists outside technology, and technology will never replace the physical world of time and space. As such, there are natural and moral laws within which technology must exist and with which it must comply.
The purposes of government are to punish those who do evil (violate moral law) and to praise those who do good. Technologists would have us believe that their work, the tools they build, is good and does good; therefore, their only due from government is praise and thanks. The problem is that technology can be used for evil, and it can be designed for evil. Therefore, to whatever extent there is evil to be punished in how technology is built or used, government has the God-ordained right to probe for and discover that evil so it can punish those who committed it.
Before looking at how this applies to AI, it would be helpful to address two challenges.
First, because technologies can have multiple purposes, some of which can be evil, there is a danger that a technology could be considered inherently evil because one of its purposes can be evil. For instance, the gun control lobby wants to hold gun manufacturers accountable for mass shootings. Unjustified shootings are indeed evil, but they are not the only possible use of a gun. There are many legitimate uses of a gun, including self-defense, survival, hunting, competition, and target practice. To hold gun manufacturers accountable for mass shootings is therefore to deny every other legitimate use of a firearm.
Second, to the extent that evil in technology is not readily seen and can be hidden, some may be prompted to argue for an aggressive regulatory state that preemptively monitors technology development for evil activity. The problem is that this creates a surveillance state, one that invades people's privacy and interferes with legitimate technology development, experimentation, and scientific inquiry.
With those things understood, it is possible that evil could be occurring or embedded beneath the surface of sophisticated technology, or could result from how technology is designed or used, and artificial intelligence is no exception. To the extent that such evil is manifested with visible consequences, government is within its full right of authority to understand everything that was used in pursuing the motive, intent, and opportunity to commit that evil.
That brings us to AI. Henry Kissinger has asked, “Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of out-thinking and potentially outmaneuvering them?” I've asked similar questions about driverless cars. I don't have all the answers, but I do know which answers I would not accept.
I would not accept the black-box non-explanation for the actions of AI. It's not that mysterious. If we can write an algorithm that can match patterns across lots of little bits of data, then we can also write an algorithm that shows us at least a sampling of the little bits of data it used and the conclusions it drew, along with how it came to those conclusions.
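To make the point concrete, here is a minimal sketch in Python. It is not taken from any actual AI system, and the class and method names are my own illustration: a toy pattern matcher that returns, alongside each conclusion, the specific stored examples it relied on and how it ranked them.

```python
# A minimal sketch (hypothetical names, not any production system's API):
# a pattern-matching classifier that reports, alongside each prediction,
# which stored examples it relied on and how it weighed them.
from dataclasses import dataclass
from math import dist

@dataclass
class Explanation:
    prediction: str
    evidence: list  # (distance, features, label) for the nearest examples

class ExplainableNearestNeighbor:
    def __init__(self, k=3):
        self.k = k
        self.examples = []  # stored (features, label) pairs

    def fit(self, features, labels):
        self.examples = list(zip(features, labels))

    def predict(self, query):
        # Rank the stored examples by similarity to the query.
        ranked = sorted(
            ((dist(query, f), f, lbl) for f, lbl in self.examples),
            key=lambda t: t[0],
        )[: self.k]
        # A majority vote among the nearest examples is the "conclusion";
        # the ranked list itself is the sampling of data behind it.
        votes = {}
        for _, _, lbl in ranked:
            votes[lbl] = votes.get(lbl, 0) + 1
        prediction = max(votes, key=votes.get)
        return Explanation(prediction, ranked)

# Usage: the result exposes the evidence, not just the answer.
model = ExplainableNearestNeighbor(k=2)
model.fit([(0, 0), (0, 1), (5, 5), (6, 5)], ["benign", "benign", "risky", "risky"])
result = model.predict((5, 4))
print(result.prediction)   # "risky"
print(result.evidence)     # the specific stored examples that drove the call
```

The point is not that real systems are this simple; it is that recording and surfacing the evidence behind a conclusion is an engineering choice, not an impossibility.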
We don't have to wait to see how this is going to work. While the application of AI to driving is still in development, the beginning stages of applying AI to medicine are already in progress in the form of data collection. Electronic health records collecting the data for those artificial intelligence algorithms are causing malpractice claims to rise. I would be slower than Kissinger to claim that “Artificial intelligence will in time bring extraordinary benefits to medical science.” Technologists are currently outmaneuvering medical professionals by legally requiring “meaningful use” of this technology before it has had time to adequately mature.
Kissinger asks, “What is the role of ethics in this process, which consists in essence of the acceleration of choices?” Just as man is uniquely human because of something outside himself, so too ethics is a matter of moral law outside of man. Morality is not determined by a majority.
Kissinger notes, “Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI's mechanisms or being overawed by its capacities.” Technologists and the scientific intelligentsia are happy to leave that mystique in place, too. It preserves their power and income-earning potential. The most important thing anyone, especially someone in a position of responsible authority, can do is to never allow themselves to be “overawed by its capacities.” That's when vigilance dies.
Kissinger concludes that “governance, insofar as it deals with the subject, is more likely to investigate AI's applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.”
The role of government with respect to artificial intelligence, technology, and anyone it governs is ultimately not about security, intelligence, the human condition, or impressive capacities of the technology. The role of government with respect to artificial intelligence is as it is with everything else: to punish those who do evil, and to praise those who do good.