These debates — between those who seek a president who models ethical virtue, and those who would regard that desire as misguided at best — are likely to continue. One thing that must be acknowledged, however, is that even the best defenses of presidential vice cannot be taken to excuse all forms of moral failure. Machiavelli, and those who follow him, can at most be used to defend a president whose vices are effectively able to create a more ethical world for others.
Not all sorts of wrongdoing, though, can plausibly be thought to have these effects. Some vices, such as an outsized confidence, or the will to use violence in the name of justice, may be defended with reference to the ideas of Machiavelli or Walzer. Other ethical failings, however — such as a vindictive desire to punish perceived enemies — often seem less likely to lead to good results. This sort of failure, however, appears to be common among those who have sought the presidency.
It is a failure, moreover, that does not depend upon party affiliation. In recent decades, for example, both Lyndon Baines Johnson and Richard Nixon took particular delight in humiliating and degrading their political adversaries. Both, perhaps, might have been better leaders had they been more reflective about when and how to do wrong. In presidential politics, all parties might at least agree on this much: if there is sometimes a reason to seek an ethically flawed president, it does not follow that all ethical flaws are equally worth defending.
Michael Blake, University of Washington

Character and democracy

Voters disagree about the extent to which the president must demonstrate moral leadership.

Virtue, vice and the presidency

These ideas have, of course, been a part of many long-standing debates about presidential morality. Some evangelicals have used the Biblical story of Cyrus the Great to explain their continued support for President Trump.
Effective vice

Scholars who study political ethics disagree as well. Those who insist that the president must be virtuous often begin with the thought that a person in that office will face new and unanticipated problems during his or her term.
A president whose decision-making is informed by a consistent character will, in the face of new challenges, rely upon the lessons that have built that character. Abraham Lincoln, for instance, consistently and publicly referred to the same set of moral values throughout his life — values centered on a deep, if imperfect, belief in the moral equality of people. These principles provided him with guidance throughout the horrors of the Civil War.
A president whose decisions are not grounded in the right sort of ethical values may be less well-equipped to respond well — and, more importantly, might be frighteningly unpredictable in his or her responses. Other political ethicists have emphasized the ways in which democracies can fall apart in the absence of personal virtue.
Conservative thinkers, in particular, have argued that political institutions can only function when all those who participate within them are capable of compromise and of self-government. If this is true of citizens, it is even more true of the president, whose opportunities to damage the system through unprincipled actions are so much greater. These arguments have been met with powerful objections.
The most famous of these objections derives from Niccolò Machiavelli. The good leader, Machiavelli insisted, is morally right to do what is usually taken to be wrong: he or she must be cruel, deceptive and often violent. The philosopher Arthur Applbaum refers to this as role morality. What a person is right to do, argues Applbaum, often depends upon the job that person is doing. The good lawyer, for instance, may have to bully, browbeat or humiliate hostile witnesses; that is what a zealous defense might require. Machiavelli notes simply that, in a hostile and brutal world, political leaders might have similar reasons to do what is usually forbidden.
Modern philosophers such as Michael Walzer have continued this line of reasoning.

As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared.
Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies. James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence their design early on, so as to encourage beneficial moral outcomes (Moor). The following sections describe some potential technological concerns.
Information technology has shown an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year at a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore). History has shown that his predictions were rather conservative: this doubling of speed and capability, along with a halving of production costs, has continued roughly every eighteen months since, and is likely to continue.
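The arithmetic behind this growth claim is worth making explicit. The short sketch below uses the eighteen-month doubling period quoted above; the time spans are illustrative:

```python
# Growth implied by a doubling of capability every 18 months (1.5 years).
def growth_factor(years, doubling_period_years=1.5):
    """Return the multiplicative growth after `years` of steady doubling."""
    doublings = years / doubling_period_years
    return 2 ** doublings

# Over one decade: 10 / 1.5 ≈ 6.7 doublings, i.e. roughly a 100-fold increase.
decade = growth_factor(10)

# Over five decades the factor exceeds ten billion, which is why Moore's
# original ten-year forecast proved so conservative.
half_century = growth_factor(50)

print(round(decade))         # ≈ 102
print(f"{half_century:.2e}")
```

The point of the sketch is simply that steady doubling compounds: a modest-sounding rule produces a ten-order-of-magnitude change within a single working lifetime.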
This phenomenon is not limited to computer chips and can also be found in many different forms of information technologies. The potential power of this accelerating change has captured the imagination of the noted inventor and futurist Ray Kurzweil. He has famously predicted that if this doubling of capabilities continues, and more and more technologies become information technologies, then there will come a point in time when the change from one generation of information technology to the next becomes so massive that it will change everything about what it means to be human.
If this is correct, there could be no more profound change to our moral values. Many are skeptical, however. Mary Midgley, for example, argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death. In a similar vein, Sullins argues that there is often a quasi-religious aspect to the acceptance of transhumanism, one committed to certain outcomes — such as the uploading of human consciousness into computers as a way to achieve immortality — and that acceptance of the transhumanist hypothesis influences the values embedded in computer technologies, which can be dismissive of, or hostile to, the human body.
Just because something grows exponentially for some time does not mean that it will continue to do so forever (Floridi). While many ethical systems place a primary moral value on preserving and protecting nature and the naturally given world, transhumanists do not see any intrinsic value in defining what is natural and what is not, and consider arguments to preserve some perceived natural state of the human body to be an unjustifiable obstacle to progress. Not all philosophers are critical of transhumanism; as an example, Nick Bostrom of the Future of Humanity Institute at Oxford University argues that, putting aside the feasibility argument, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible (Bostrom).

Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving.
Artificial Life (ALife) is a project that is not as old as AI, focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse-engineered and synthesized in other kinds of technologies.
Both AI and ALife are vast research projects that defy simple explanation. Instead, the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern.
A description of the Turing Test and its implications for philosophy outside of moral values can be found in the entry on the Turing test. For example, Luciano Floridi argues that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in producing intelligence it has been a dismal disappointment. The opposite opinion has also been argued, and some claim that the Turing Test has already been passed, or at least that programmers are on the verge of doing so.
For instance, it was reported by the BBC in 2014 that the Turing Test had been passed by a program that could convince the judges it was a 13-year-old Ukrainian boy, though many experts remain skeptical (BBC). Yale professor David Gelernter worries that thinking machines would raise certain uncomfortable moral issues. Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being (Sullins). AI is certainly capable of creating machines that can converse effectively in simple ways with human beings, as evidenced by Apple's Siri, Amazon's Alexa, Google Assistant, etc.
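The sort of simple conversational ability at issue can be illustrated with a toy, ELIZA-style pattern matcher. This is a deliberately crude sketch of my own; commercial assistants such as Siri or Alexa use far more sophisticated techniques, but the conversational surface can look deceptively similar:

```python
import re

# A few hand-written (pattern, response) rules. The rules and templates
# here are invented for illustration only.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\b(hello|hi)\b", "Hello. What would you like to talk about?"),
]

def reply(utterance):
    """Return a canned response for the first matching rule."""
    text = utterance.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(reply("I feel uneasy about machines"))  # Why do you feel uneasy about machines?
print(reply("What is consciousness?"))        # Tell me more.
```

A handful of such rules can sustain a surprisingly plausible exchange, which is precisely why surface conversational competence is a poor test of anything like understanding.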
But that may not matter when it comes to assessing the moral impact of these technologies. In addition, there are still many other applications that use AI technology. Nearly all of the information technologies discussed above — such as search, computer games, data mining, malware filtering and robotics — make use of AI programming techniques.
Thus AI will grow to be a primary location for the moral impacts of information technologies. Many governments and professional associations are now developing ethical guidelines and standards to help shape this important technology; one good example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Artificial Life (ALife) is an outgrowth of AI and refers to the use of information technology to simulate or synthesize life functions.
The problem of defining life has been an interest in philosophy since its founding. See the entry on life for a look at the concept of life and its philosophical ramifications. If scientists and technologists were to succeed in discovering the necessary and sufficient conditions for life and then successfully synthesize it in a machine or through synthetic biology, then we would be treading on territory that has significant moral impact.
Mark Bedau has been tracing the philosophical implications of ALife for some time now and argues that there are two distinct forms of ALife, each of which would have different moral effects if and when we succeed in realizing these separate research agendas (Bedau; Bedau and Parke). One form of ALife is completely computational and is in fact the earliest form of ALife studied. ALife is inspired by the work of the mathematician John von Neumann on self-replicating cellular automata, which von Neumann believed would lead to a computational understanding of biology and the life sciences. Artificial Life programs are quite different from AI programs.
Where AI is intent on creating or enhancing intelligence, ALife is content with very simple programs that display life functions rather than intelligence. The primary moral concern here is that these programs are designed to self-reproduce, and in that way resemble computer viruses; indeed, successful ALife programs could become vectors for malware. The second form of ALife is much more morally charged. This form of ALife is based on manipulating actual biological and biochemical processes in such a way as to produce novel life forms not seen in nature.
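The first, purely computational form of ALife can be illustrated with Conway's Game of Life, a famous cellular automaton in the tradition of von Neumann's work (an illustrative example of my choosing, not one discussed in the cited literature). Three simple rules are enough to produce the "glider", a five-cell pattern that moves itself across the grid generation by generation:

```python
from collections import Counter

def step(live_cells):
    """One generation of Conway's Game of Life.

    `live_cells` is a set of (x, y) coordinates. A live cell survives
    with 2 or 3 live neighbours; a dead cell becomes live with exactly 3.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: after 4 generations the same shape reappears, shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in these few lines is intelligent, yet the pattern persists and propagates — a small instance of the "life functions without intelligence" that computational ALife studies.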
Scientists at the J. Craig Venter Institute succeeded in synthesizing an artificial bacterium. While the media paid attention to this breakthrough, they tended to focus on the potential ethical and social impacts of the creation of artificial bacteria. Craig Venter himself launched a public relations campaign trying to steer the conversation about issues relating to creating life.
This first episode in the synthesis of life gives us a taste of the excitement and controversy that will be generated when more viable and robust artificial protocells are synthesized. The ethical concerns raised by Wet ALife, as this kind of research is called, are more properly the jurisdiction of bioethics see entry on theory and bioethics.
But it does have some concern for us here, in that Wet ALife is part of the process of turning theories from the life sciences into information technologies. This will tend to blur the boundaries between bioethics and information ethics. Just as software ALife might lead to dangerous malware, so too might Wet ALife lead to dangerous bacteria or other disease agents. Critics suggest that there are strong moral arguments against pursuing this technology, and that we should apply the precautionary principle, which states that if there is any chance of a technology causing catastrophic harm, and there is no scientific consensus suggesting that the harm will not occur, then those who wish to develop that technology or pursue that research must first prove it to be harmless (see Epstein). Mark Bedau and Mark Triant argue against too strong an adherence to the precautionary principle, suggesting that we should instead opt for moral courage in pursuing such an important step in human understanding of life. They appeal to the Aristotelian notion of courage: not a headlong and foolhardy rush into the unknown, but a resolute and careful step forward into the possibilities offered by this research.
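The strong version of the precautionary principle described above can be captured as a small decision rule. This toy formalization (the function and parameter names are my own) is only meant to make its logic explicit:

```python
def may_proceed(catastrophic_harm_possible,
                consensus_harm_will_not_occur,
                proven_harmless):
    """Strong precautionary principle, as stated above.

    Development may proceed only if either the catastrophic-harm worry
    does not arise, or the developers have proven the technology harmless.
    """
    if catastrophic_harm_possible and not consensus_harm_will_not_occur:
        return proven_harmless  # burden of proof falls on the developer
    return True

# Wet ALife under the critics' assumptions: possible catastrophe, no
# scientific consensus of safety, no proof of harmlessness -> halt.
print(may_proceed(True, False, False))  # False
```

Writing the rule out this way also makes the Bedau and Triant objection vivid: because proof of harmlessness is rarely available in advance, the rule halts almost any genuinely novel research.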
Information technologies have not been content to remain confined to virtual worlds and software implementations. These technologies are also interacting directly with us through robotics applications. Robotics is an emerging technology, but it has already produced a number of applications with important moral implications. Technologies such as military robotics, medical robotics, personal robotics and the world of sex robots are just some of the already existent uses of robotics that impact on and express our moral commitments (see Anderson and Anderson; Capurro and Nagenborg; Lin et al.).
There have already been a number of valuable contributions to the growing fields of machine morality and robot ethics (roboethics). The introduction of semi- and fully autonomous machines — machines that make decisions with little or no human intervention — into public life will not be simple. Toward this end, Wallach has also contributed to the discussion on the role of philosophy in helping to design public policy on the use and regulation of robotics. Military robotics has proven to be one of the most ethically charged robotics applications (Lin et al.).
Today these machines are largely remotely operated (telerobots) or semi-autonomous, but over time they are likely to become more and more autonomous due to the necessities of modern warfare (Singer). In the first decades of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants (Plaw), and this fact alone is of deep moral concern. Gerhard Dabringer has conducted numerous interviews with ethicists and technologists regarding the implications of automated warfare (Dabringer). Many ethicists are cautious in their acceptance of automated warfare, with the provision that the technology be used to enhance ethical conduct in war, for instance by reducing civilian and military casualties or helping warfighters follow International Humanitarian Law and other legal and ethical codes of conduct in war (see Lin et al.).
A key development in the realm of information technologies is that they are not only the object of moral deliberations but they are also beginning to be used as a tool in moral deliberation itself. Since artificial intelligence technologies and applications are a kind of automated problem solvers, and moral deliberations are a kind of problem, it was only a matter of time before automated moral reasoning technologies would emerge.
This is still only an emerging technology but it has a number of very interesting moral implications which will be outlined below. The coming decades are likely to see a number of advances in this area and ethicists need to pay close attention to these developments as they happen.
Susan and Michael Anderson have collected a number of articles regarding this topic in their book Machine Ethics, and Rocci Luppicini has a section of his anthology, the Handbook of Research on Technoethics, devoted to this topic. Patrick Grim has been a longtime proponent of the idea that philosophy should utilize information technologies to automate and illustrate philosophical thought experiments (Grim et al.).
Peter Danielson has also written extensively on this subject, beginning with his book Modeling Rationality, Morality, and Evolution; much of the early research in the computational theory of morality centered on using computer models to elucidate the emergence of cooperation between simple software AI or ALife agents (Sullins). Luciano Floridi and J. W. Sanders argue that information as it is used in the theory of computation can serve as a powerful idea that can help resolve some of the famous moral conundrums in philosophy, such as the nature of evil. They propose that, along with moral evil and natural evil — both concepts familiar to philosophy (see entry on the problem of evil) — we add a third concept they call artificial evil; if we do this, they contend, we can see that the actions of artificial agents can themselves be evaluated in moral terms (Floridi and Sanders).
Evil can then be equated with something like information dissolution, where the irretrievable loss of information is bad and the preservation of information is good (Floridi and Sanders). This idea can move us closer to a way of measuring the moral impacts of any given action in an information environment.
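The computer models of emerging cooperation mentioned above in connection with Danielson's work can be sketched as an iterated prisoner's dilemma between simple software agents. The payoffs and strategies below are the standard textbook ones in the tradition of Axelrod's tournaments, not drawn from any specific model in the literature cited:

```python
# Iterated prisoner's dilemma with the standard payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return the two strategies' total scores over repeated rounds."""
    history_a, history_b = [], []  # each agent sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

Models of this kind illustrate the key finding of the early research: over repeated interactions, mutually cooperative agents far outscore mutual defectors, which is one way cooperation can "emerge" among very simple programs.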
Early in the twentieth century, the American philosopher John Dewey (see entry on John Dewey) proposed a theory of inquiry based on the instrumental uses of technology. Dewey had an expansive definition of technology, which included not only common tools and machines but also information systems such as logic, laws and even language (Hickman). This is a helpful standpoint to take, as it allows us to advance the idea that an information technology of morality and ethics is not impossible.
It also allows us to take seriously the idea that the relations and transactions between human agents, and those that exist between humans and their artifacts, have important ontological similarities. While Dewey could only dimly perceive the coming revolutions in information technologies, his theory is useful to us still, because he proposed that ethics was not only a theory but a practice, and that solving problems in ethics is like solving problems in algebra (Hickman). If he is right, then an interesting possibility arises: namely, that ethics and morality are computable problems, and that it should therefore be possible to create an information technology that can embody moral systems of thought.
Engineers do not argue in terms of reasoning by categorical imperatives; instead, they use a different style of moral reasoning: "In short, the rules he comes up with are based on fact and value. I submit that this is the way moral rules ought to be fashioned, namely, as rules of conduct deriving from scientific statements and value judgments."
In short, ethics could be conceived as a branch of technology (Bunge). Taking this view seriously implies that the very act of building information technologies is also the act of creating specific moral systems within which human and artificial agents will, at least occasionally, interact through moral transactions. Information technologists may therefore be in the business of creating moral systems, whether they know it or not and whether or not they want that responsibility.
The most comprehensive literature arguing in favor of the prospect of using information technology to create artificial moral agents is that of Luciano Floridi, and of Floridi together with Jeff W. Sanders. Floridi recognizes that the issues raised by the ethical impacts of information technologies strain our traditional moral theories. To relieve this friction, he argues that what is needed is a broader philosophy of information. After making this move, Floridi claims that information is a legitimate environment of its own, one with its own intrinsic value that is in some ways similar to the natural environment and in other ways radically foreign; either way, the result is that information is in its own right worthy of ethical concern.
Floridi uses these ideas to create a theoretical model of moral action using the logic of object-oriented programming. Note that there is no assumption about the ontology of the agents concerned in the moral relationship modeled; these agents can be any mixture of artificial or natural in origin (Sullins). While scholars recognize that we are still some time from creating information technology that would be unequivocally recognized as an artificial moral agent, there are strong theoretical arguments suggesting that automated moral reasoning is an eventual possibility, and it is therefore an appropriate area of study for those interested in the moral impacts of information technologies.
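The appeal to object-oriented logic can be loosely illustrated in code. The class names and the moral_transaction function below are my own illustrative inventions, not Floridi's model itself; the point of the sketch is only that nothing in such a model fixes whether an agent is natural or artificial:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """A party to a moral transaction; nothing here fixes its ontology."""

    def __init__(self, name):
        self.name = name

    @abstractmethod
    def act(self, target):
        """Perform some action affecting `target`; return a description."""

class HumanAgent(Agent):
    def act(self, target):
        return f"{self.name} (human) assists {target.name}"

class SoftwareAgent(Agent):
    def act(self, target):
        return f"{self.name} (software) assists {target.name}"

def moral_transaction(actor, target):
    """A transaction is well-formed for ANY mix of agent subtypes."""
    assert isinstance(actor, Agent) and isinstance(target, Agent)
    return actor.act(target)

# Mixed natural/artificial interactions are handled uniformly:
print(moral_transaction(HumanAgent("Ada"), SoftwareAgent("Bot")))
print(moral_transaction(SoftwareAgent("Bot"), HumanAgent("Ada")))
```

The design choice doing the work is inheritance from a common abstract class: the transaction is defined over the abstract type, so human and software agents are interchangeable in the moral relationship being modeled.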
Introduction

Information technology is ubiquitous in the lives of people across the globe.

The Moral Challenges of Information Technology

The move from one set of dominant information technologies to another is always morally contentious. In Plato's Phaedrus, the god Theuth offers King Thamus the gift of writing. Thamus is not pleased with the gift and replies: "If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." (Phaedrus, section 275a) Socrates, who was adept at quoting lines from poems and epics and placing them into his conversations, fears that those who rely on writing will never be able to truly understand and live by these words.
Written words, he says, "…seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you the same thing forever." Helen Nissenbaum observes that "[w]here previously, physical barriers and inconvenience might have discouraged all but the most tenacious from ferreting out information, technology makes this available at the click of a button or for a few dollars" (Nissenbaum), and since the time she wrote this, the gathering of personal data has become more automated and cheaper.
Specific Moral Challenges at the Cultural Level

In the section above, the focus was on the moral impacts of information technologies on the individual user. When sharing information on social networking services (SNS), it is the responsibility of the one desiring to share information to verify its accuracy before sharing it. A user of SNS should not post information about themselves that they feel they may want to retract at some future date.
Furthermore, users of SNS should not post information that is the product of the mind of another individual unless they are given consent by that individual. In both cases, once the information is shared, it may be impossible to retract. It is the responsibility of the SNS user to determine the authenticity of a person or program before allowing that person or program access to the shared information. Even before companies like Facebook were making huge profits, there were those warning of the dangers of the cult of transparency, with warnings such as: "…it is not surprising that public distrust has grown in the very years in which openness and transparency have been so avidly pursued."
This lag is constantly exploited by malware producers, and in this model there is an ever-present security hole that is impossible to fill (Aycock and Sullins). Thus it is important that security professionals do not overstate their ability to protect systems; by the time a new malicious program is discovered and patched, it has already done significant damage, and there is currently no way to stop this (Aycock and Sullins). There are other cogent critiques of this argument, but none as simple as the realization that "…there is, after all, a limit to how small things can get before they simply melt."
Bibliography

Adam, A.
Anderson, M. and S. Anderson (eds.)
Arkin, R.
Arquilla, J.
Asaro, P.
Au-Yeung, A.
Aycock, J.
Bartell, C.
Baase, S.
Bedau, M.
Bedau, M. and Parke (eds.)
Bostrom, N., in Gordijn and R. Chadwick (eds.), Berlin: Springer.
Brey, P.
Bunge, M.
Bynum, T.
Capurro, R.
Cisco Systems, Inc.
Voss, M.
Taylor, A.
Delextrat, A.
Constructing a morality of caring: codes and values in Australian carer discourse.
Ogunleye, and G.
Danielson, P.
Dabringer, G.
Dodig-Crnkovic, G.
Epstein, R.
Epstein, L., International Economic Review, 21(2).
Ess, C.
Facebook, Inc.
Floridi, L.
Gelernter, D.
Baileya, T., Lambirtha, and W.
Grim, P., Mar, and P.
Grodzinsky, F.
Hansen, R.
Himma, K. and H. Tavani (eds.)
Luppicini, R. and Rebecca Adell (eds.)