Three Frameworks for Ethical Decision Making and Good Computing Reports

Module by: William Frey

Summary: This module provides three frameworks that are essential to professional and occupational ethics classes being taught at the University of Puerto Rico - Mayaguez during the academic year of 2006-7. The first framework converts the Software Development Cycle into a decision-making framework consisting of problem specification, solution generation, solution testing, and solution implementation. The second framework zeros in on the solution testing phase of the software development cycle by offering four tests to evaluate and rank solutions in terms of their ethical implications. The third framework offers a feasibility test designed to identify obstacles to implementing solutions that arise from situational constraints like resource, interest, and technical limitations. These frameworks are abbreviated from materials that will eventually be published in Good Computing: A Virtue Approach to Computer Ethics, which is being authored by Chuck Huff, William Frey, and Jose Cruz-Cruz. They can also be supplemented by consulting Engineering Ethics: Concepts and Cases by Harris, Pritchard, and Rabins. This module is being developed as part of an NSF-funded project, "Collaborative Development of Ethics Across the Curriculum Resources and Sharing of Best Practices," NSF SES 0551779.


Module Introduction

In this module you will learn and practice three frameworks designed to integrate ethics into decision making in the areas of practical and occupational ethics. The first framework divides the decision-making process into four stages: problem specification, solution generation, solution testing, and solution implementation. It is based on an analogy between ethics and design problems that is detailed in a table presented below. The second framework focuses on the process of solution testing by providing four tests that will help you to evaluate and rank alternative courses of action. The reversibility, harm/beneficence, and public identification tests each "encapsulate" or summarize an important ethical theory. A value realization test assesses courses of action in terms of their ability to realize or harmonize different moral and nonmoral values. Finally, a feasibility test will help you to uncover interest, resource, and technical constraints that will affect and possibly impede the realization of your solution or decision. Taken together, these three frameworks will help steer you toward designing and implementing ethical decisions in the professional and occupational areas.

Two online resources provide more extensive background information. The first provides background information on the ethics tests, socio-technical analysis, and intermediate moral concepts. The second explores in more detail the analogy between ethics and design problems. Much of this information will be published in Good Computing: A Virtue Approach to Computer Ethics, a textbook of cases and decision-making techniques in computer ethics that is being authored by Chuck Huff, William Frey, and Jose A. Cruz-Cruz.

Problem-Solving or Decision-Making Framework: Analogy between ethics and design

Traditionally, decision-making frameworks in professional and occupational ethics have been taken from rational decision procedures used in economics. While these are useful, they lead one to think that ethical decisions are already "out there" waiting to be discovered. In contrast, taking a design approach to ethical decision making emphasizes that ethical decisions must be created, not discovered. This, in turn, emphasizes the importance of moral imagination and moral creativity. Caroline Whitbeck in Ethics in Engineering Practice and Research describes this aspect of ethical decision making through the analogy she draws between ethics and design problems in chapter one. Here she rejects the idea that ethical problems are multiple-choice problems. We solve ethical problems not by choosing between ready-made solutions given with the situation; rather, we use our moral creativity and moral imagination to design these solutions. Chuck Huff builds on this by modifying the design method used in software engineering so that it can help structure the process of framing ethical situations and creating actions to bring these situations to a successful and ethical conclusion. The key points in the analogy between ethical and design problems are summarized in the table presented just below.

Table 1
Analogy between design and ethics problem-solving
Design Problem | Ethical Problem
Construct a prototype that optimizes (or satisfices) designated specifications | Construct a solution that integrates and realizes ethical values (justice, responsibility, reasonableness, respect, and safety)
Resolve conflicts between different specifications by means of integration | Resolve conflicts between values (moral vs. moral or moral vs. non-moral) by integration
Test prototype over the different specifications | Test solution over different ethical considerations encapsulated in ethics tests
Implement tested design over background constraints | Implement ethically tested solution over resource, interest, and technical constraints

Software Development Cycle: Four Stages

(1) problem specification, (2) solution generation, (3) solution testing, and (4) solution implementation.

Problem specification

Problem specification involves exercising moral imagination to specify the socio-technical system (including the stakeholders) that will influence and will be influenced by the decision we are about to make. Stating the problem clearly and concisely is essential to design problems; getting the problem right helps structure and channel the process of designing and implementing the solution. There is no algorithm available to crank out effective problem specification. Instead, we offer a series of guidelines or rules of thumb to get you started in a process that is accomplished by the skillful exercise of moral imagination.

For a broader problem framing model see Harris, Pritchard, and Rabins, Engineering Ethics: Concepts and Cases, 2nd Edition, Belmont, CA: Wadsworth, 2000, pp. 30-56. See also Cynthia Brincat and Victoria Wike, Morality and Professional Life: Values at Work, New Jersey: Prentice Hall, 1999.

Different Ways of Specifying the Problem

  • Many problems can be specified as disagreements. For example, you disagree with your supervisor over the safety of the manufacturing environment. Disagreements over facts can be resolved by gathering more information. Disagreements over concepts (you and your supervisor have different ideas of what safety means) require working toward a common definition.
  • Other problems involve conflicting values. You advocate installing pollution control technology because you value environmental quality and safety. Your supervisor resists this course of action because she values maintaining a solid profit margin. This is a conflict between a moral value (safety and environmental quality) and a nonmoral value (solid profits). Moral values can also conflict with one another in a given situation. Using John Doe lawsuits to force Internet Service Providers to reveal the real identities of defamers certainly protects the privacy and reputations of potential targets of defamation. But it also places restrictions on legitimate free speech by making it possible for powerful wrongdoers to intimidate those who would publicize their wrongdoing. Here the moral values of privacy and free speech are in conflict. Value conflicts can be addressed by harmonizing the conflicting values, compromising on conflicting values by partially realizing them, or setting one value aside while realizing the other (value trade-offs).
  • If you specify your problem as a disagreement, you need to describe the facts or concepts about which there is disagreement.
  • If you specify your problem as a conflict, you need to describe the values that conflict in the situation.
  • One useful way of specifying a problem is to carry out a stakeholder analysis. A stakeholder is any group or individual that has a vital interest at risk in the situation. Stakeholder interests frequently come into conflict and solving these conflicts requires developing strategies to reconcile and realize the conflicting stakes.
  • Another way of identifying and specifying problems is to carry out a socio-technical analysis. Socio-technical systems (STS) embody values. Problems can be anticipated and prevented by specifying possible value conflicts. Integrating a new technology, procedure, or policy into a socio-technical system can create three kinds of problems. (1) Conflict between values in the technology and those in the STS. For example, when an attempt is made to integrate an information system into the STS of a small business, the values present in an information system can conflict with those in the socio-technical system. (Workers may feel that the new information system invades their privacy.) (2) Amplification of existing value conflicts in the STS. The introduction of a new technology may magnify an existing value conflict. Digitizing textbooks may undermine copyrights because digital media are easy to copy and disseminate on the Internet. (3) Harmful consequences. Introducing something new into a socio-technical system may set in motion a chain of events that will eventually harm stakeholders in the socio-technical system. For example, giving laptop computers to public school students may produce long-term environmental harm when careless disposal of spent laptops releases toxic materials into the environment.
  • The following table helps summarize some of these problem categories and then outlines generic solutions.
Table 2
Problem Type | Sub-Type | Solution Outline
Factual | | Type and mode of gathering information
Conceptual | | Concept in dispute and method for agreeing on its definition
Value Conflict | Moral vs. moral; Non-moral vs. moral; Non-moral vs. non-moral | Value integrative; Partially value integrative (compromise); Trade off
Justice | Social justice; Value realization | Strategy for maintaining integrity; Strategy for restoring justice; Value integrative design strategy
Intermediate Moral Value | Public welfare, faithful agency, professional integrity, peer collegiality | Realizing value; Removing value conflicts; Prioritizing values for trade offs

Instructions for Using Problem Classification Table

  1. Is your problem a conflict? Moral versus moral value? Moral versus non-moral values? Non-moral versus non-moral values? Identify the conflicting values as concisely as possible. Example: In Toysmart, the financial values of creditors come into conflict with the privacy of individuals in the database: financial versus privacy values.
  2. Is your problem a disagreement? Is the disagreement over basic facts? Are these facts observable? Is it a disagreement over a basic concept? What is the concept? Is it a factual disagreement that, upon further reflection, changes into a conceptual disagreement?
  3. Does your problem arise from an impending harm? What is the harm? What is its magnitude? What is the probability that it will occur?
  4. If your problem is a value conflict then can these values be fully integrated in a value integrating solution? Or must they be partially realized in a compromise or traded off against one another?
  5. If your problem is a factual disagreement, what is the procedure for gathering the required information, if this is feasible?
  6. If your problem is a conceptual disagreement, how can this be overcome? By consulting a government policy or regulation? (OSHA on safety for example.) By consulting a theoretical account of the value in question? (Reading a philosophical analysis of privacy.) By collecting past cases that involve the same concept and drawing analogies and comparisons to the present case?
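
The classification steps above can be mirrored in a small lookup that maps a problem type to the outline of its generic solution. The problem-type labels and the fallback message below are illustrative assumptions, not taken from the module.

```python
# Hypothetical lookup mirroring the problem classification table:
# given a problem type, return the outline of a generic solution.
SOLUTION_OUTLINES = {
    "factual": "Agree on the type and mode of gathering information",
    "conceptual": "Identify the concept in dispute and a method for defining it",
    "value conflict": "Integrate, partially realize (compromise), or trade off values",
    "impending harm": "Assess magnitude and probability, then prevent or mitigate",
}

def classify(problem_type):
    """Return a generic solution outline for a recognized problem type."""
    return SOLUTION_OUTLINES.get(problem_type.lower(), "Reframe the problem and retry")

print(classify("Factual"))  # → Agree on the type and mode of gathering information
```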

If you are having problems specifying your problem

  • Try identifying the stakeholders. Stakeholders are any group or individual with a vital interest at stake in the situation at hand.
  • Project yourself imaginatively into the perspectives of each stakeholder. How does the situation look from their standpoint? What are their interests? How do they feel about their interests?
  • Compare the results of these different imaginative projections. Do any stakeholder interests conflict? Do the stakeholders themselves stand in conflict?
  • If the answer to one or both of these questions is "yes" then this is your problem statement. How does one reconcile conflicting stakeholders or conflicting stakeholder interests in this situation?

Framing Your Problem

  • We miss solutions to problems because we choose to frame them in only one way.
  • For example, the Mountain Terrorist Dilemma is usually framed in only one way: as a dilemma, that is, a forced decision between two equally undesirable alternatives. (Gilbane Gold is also framed as a dilemma: blow the whistle on Z-Corp or go along with the excess pollution.)
  • Framing a problem differently opens up new horizons for solutions. Your requirement from this point on in the semester is to frame every problem you are assigned in at least two different ways.
  • For examples of how to frame problems using socio-technical system analysis see module m14025.
  • These different frames are summarized in the next box below.

Different Frames for Problems

  • Technical Frame: Engineers frame problems technically, that is, they specify a problem as raising a technical issue and requiring a technical design for its resolution. For example, in the Hughes case, a technical frame would raise the problem of how to streamline the manufacturing and testing processes of the chips.
  • Physical Frame: In the Laminating Press case, the physical frame would raise the problem of how the layout of the room could be changed to reduce the white powder. Would better ventilation eliminate or mitigate the white powder problem?
  • Social Frame: In the "When in Aguadilla" case, the Japanese engineer is uncomfortable working with the Puerto Rican woman engineer because of social and cultural beliefs concerning women still widely held by men in Japan. Framing this as a social problem would involve asking whether there would be ways of getting the Japanese engineer to see things from the Puerto Rican point of view.
  • Financial or Market-Based Frames: The DOE, in the Risk Assessment case below, accuses the laboratory and its engineers of trying to extend the contract to make more money. The supervisor of the head of the risk assessment team pressures the team leader to complete the risk assessment as quickly as possible so as not to lose the contract. These two framings highlight financial issues.
  • Managerial Frame: As the leader of the Puerto Rican team in the "When in Aguadilla" case, you need to exercise leadership in your team. The refusal of the Japanese engineer to work with a member of your team creates a management problem. What would a good leader, a good manager, do in this situation? What does it mean to call this a management problem? What management strategies would help solve it?
  • Legal Frame: OSHA may have clear regulations concerning the white powder produced by laminating presses. How can you find out about these regulations? What would be involved in complying with them? If they cost money, how would you get this money? These are questions that arise when you frame the Laminating Press case as a legal problem.
  • Environmental Framing: Finally, viewing your problem from an environmental frame leads you to consider the impact of your decision on the environment. Does it harm the environment? Can this harm be avoided? Can it be mitigated? Can it be offset? (Could you replant elsewhere the trees you cut down to build your new plant?) Could you develop a short term environmental solution to "buy time" for designing and implementing a longer term solution? Framing your problem as an environmental problem requires that you ask whether this solution harms the environment and whether this harming can be avoided or remedied in some other way.

Solution Generation

In solution generation, agents exercise moral creativity by brainstorming to come up with solution options designed to resolve the disagreements and value conflicts identified in the problem specification stage. Brainstorming is crucial to generating nonobvious solutions to difficult, intractable problems. This process must take place within a non-polarized environment where the members of the group respect and trust one another. (See the module on the Ethics of Group Work for more information on how groups can be successful and pitfalls that commonly trip up groups.) Groups effectively initiate the brainstorming process by suspending criticism and analysis. After the process is completed (say, by meeting a quota), participants can then refine the solutions generated by combining them, eliminating those that don't fit the problem, and ranking them in terms of their ethics and feasibility. If a problem can't be solved, perhaps it can be dissolved through reformulation. If an entire problem can't be solved, perhaps the problem can be broken down into parts, some of which can be readily solved.

Having trouble generating solutions?

  • One of the most difficult stages in problem solving is to jump start the process of brainstorming solutions. If you are stuck then here are some generic options guaranteed to get you "unstuck."
  • Gather Information: Many disagreements can be resolved by gathering more information. Because this is the easiest and least painful way of reaching consensus, it is almost always best to start here. Gathering information may not be possible because of different constraints: there may not be enough time, the facts may be too expensive to gather, or the information required goes beyond scientific or technical knowledge. Sometimes gathering more information does not solve the problem but allows for a new, more fruitful formulation of the problem. Harris, Pritchard, and Rabins in Engineering Ethics: Concepts and Cases show how solving a factual disagreement allows a more profound conceptual disagreement to emerge.
  • Nolo Contendere. Nolo contendere is Latin for not opposing or contending. Your interests may conflict with your supervisor's, but he or she may be too powerful to reason with or oppose. So your only choice here is to give in to his or her interests. The problem with nolo contendere is that non-opposition is often taken as agreement. You may need to document (e.g., through memos) that your choosing not to oppose does not indicate agreement.
  • Negotiate. Good communication and diplomatic skills may make it possible to negotiate a solution that respects the different interests. Value integrative solutions are designed to integrate conflicting values. Compromises allow for partial realization of the conflicting interests. (See the module, The Ethics of Team Work, for compromise strategies such as logrolling or bridging.) Sometimes it may be necessary to set aside one's interests for the present with the understanding that these will be taken care of at a later time. This requires trust.
  • Oppose. If nolo contendere and negotiation are not possible, then opposition may be necessary. Opposition requires marshalling evidence to document one's position persuasively and impartially. It makes use of strategies such as leading an "organizational charge" or "blowing the whistle." For more on whistle-blowing, consult the discussion of whistle-blowing in the Hughes case that can be found at Computing Cases.
  • Exit. Opposition may not be possible if one lacks organizational power or documented evidence. Nolo contendere will not suffice if non-opposition implicates one in wrongdoing. Negotiation will not succeed without a necessary basis of trust or a serious value integrative solution. As a last resort, one may have to exit from the situation by asking for reassignment or resigning.

Refining solutions

  • Are any solutions blatantly unethical or unrealizable?
  • Do any solutions overlap? Can these be integrated into broader solutions?
  • Can solutions be brought together as courses of action that can be pursued simultaneously?
  • Go back to the problem specification. Can any solutions be eliminated because they do not address the problem? (Or can the problem be revised to better fit what, intuitively, is a good solution?)
  • Can solutions be brought together as successive courses of action? For example, one solution represents Plan A; if it does not work then another solution, Plan B, can be pursued. (You negotiate the problem with your supervisor. If she fails to agree, then you oppose your supervisor on the grounds that her position is wrong. If this fails, you conform or exit.)
  • The goal here is to reduce the solution list to something manageable, say, a best, a second best, and a third best. Try adding a bad solution to heighten strategic points of comparison. The list should be short so that the remaining solutions can be intensively examined as to their ethics and feasibility.

Solution Testing: The solutions developed in the second stage must be tested in various ways.

  1. Reversibility: Is the solution reversible between the agent and key stakeholders?
  2. Harm/Beneficence: Does the solution minimize harm? Does it produce benefits that are justly distributed among stakeholders?
  3. Publicity: Is this action one with which you are willing to be publicly identified? Does it identify you as a moral person? An irresponsible person? A person of integrity? An untrustworthy person?
  4. Code: Does the solution violate any provisions of a relevant code of ethics? Can it be modified to be in accord with a code of ethics? Does it address any aspirations a code might have? (Engineers: Does this solution hold paramount the health, safety, and welfare of the public?)
  5. Global Feasibility: Do any obstacles to implementation present themselves at this point? Are there resources, techniques, and social support for realizing the solution or will obstacles arise in one or more of these general areas? At this point, assess globally the feasibility of each solution.
  6. The solution evaluation matrix presented just below models and summarizes the solution testing process.
Table 3
Solution/Test | Reversibility | Harm/Beneficence | Publicity/Values | Code | Global Feasibility
Description | Is the solution reversible with stakeholders? Does it honor basic rights? | Does the solution produce the best benefit/harm ratio? Does the solution maximize utility? | Does the solution express and integrate key virtues? | Does the solution violate any code provisions? | Are there constraints or obstacles to realizing the solution?
Best solution | | | | |
Second Best | | | | |
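
One way to work with the solution evaluation matrix is to encode it as a score table. The sample solutions, the 1-3 scoring scale, and the scores themselves are invented for illustration; the module itself calls for qualitative comparison, not numeric scoring.

```python
# Illustrative encoding of the solution evaluation matrix: each solution is
# scored 1 (worst) to 3 (best) on each ethics test. All scores are invented.
TESTS = ["reversibility", "harm/beneficence", "publicity", "code", "feasibility"]

solutions = {
    # scores line up with the TESTS columns above
    "negotiate with supervisor": [3, 3, 3, 3, 2],
    "blow the whistle":          [2, 2, 3, 3, 1],
}

def rank(matrix):
    """Order solutions by total score across all tests, best first."""
    return sorted(matrix, key=lambda name: sum(matrix[name]), reverse=True)

best, second = rank(solutions)
```

The totals give only a first-pass ordering; a serious ranking would weigh the tests qualitatively rather than summing equal-weight scores.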

Solution Implementation

The chosen solution must be examined in terms of how well it responds to various situational constraints that could impede its implementation. What will be its costs? Can it be implemented within necessary time constraints? Does it honor recognized technical limitations or does it require pushing these back through innovation and discovery? Does it comply with legal and regulatory requirements? Finally, could the surrounding organizational, political, and social environments give rise to obstacles to the implementation of the solution? In general this phase requires looking at interest, technical, and resource constraints or limitations. A Feasibility Matrix helps to guide this process.

The Feasibility Test focuses on situational constraints. How could these hinder the implementation of the solution? Should the solution be modified to ease implementation? Can the constraints be removed or remodeled by negotiation, compromise, or education? Can implementation be facilitated by both modifying the solution and changing the constraints?

Table 4
Feasibility Matrix
Resource Constraints | Technical Constraints | Interest Constraints
Time | | Organizational
Cost | Applicable Technology | Legal
Materials | Manufacturability | Social, Political, Cultural
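
The feasibility matrix can be sketched as a nested dictionary that records, for each constraint, whether it is fixed or negotiable. All entries below are sample values, not findings from any case.

```python
# A sketch of the feasibility matrix as nested dictionaries; each entry
# records whether the constraint is fixed or negotiable (sample values).
feasibility = {
    "resource":  {"time": "negotiable", "cost": "fixed", "materials": "negotiable"},
    "technical": {"applicable technology": "fixed", "manufacturability": "negotiable"},
    "interest":  {"organizational": "negotiable", "legal": "fixed",
                  "social/political/cultural": "negotiable"},
}

def fixed_constraints(matrix):
    """List the constraints that cannot be moved and so bound the solution."""
    return [(category, name)
            for category, constraints in matrix.items()
            for name, status in constraints.items()
            if status == "fixed"]
```

Listing the fixed constraints first tells you which parts of the solution must be redesigned rather than negotiated.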

Different Feasibility Constraints

  1. The Feasibility Test identifies the constraints that could interfere with realizing a solution. This test also sorts out these constraints into resource (time, cost, materials), interest (individuals, organizations, legal, social, political), and technical limitations. By identifying situational constraints, problem-solvers can anticipate implementation problems and take early steps to prevent or mitigate them.
  2. Time. Is there a deadline within which the solution has to be enacted? Is this deadline fixed or negotiable?
  3. Financial. Are there cost constraints on implementing the ethical solution? Can these be extended by raising more funds? Can they be extended by cutting existing costs? Can agents negotiate for more money for implementation?
  4. Technical. Technical limits constrain the ability to implement solutions. What, then, are the technical limitations to realizing and implementing the solution? Could these be moved back by modifying the solution or by adopting new technologies?
  5. Manufacturability. Are there manufacturing constraints on the solution at hand? Given time, cost, and technical feasibility, what are the manufacturing limits to implementing the solution? Once again, are these limits fixed or flexible, rigid or negotiable?
  6. Legal. How does the proposed solution stand with respect to existing laws, legal structures, and regulations? Does it create disposal problems addressed in existing regulations? Does it respond to and minimize the possibility of adverse legal action? Are there legal constraints that go against the ethical values embodied in the solution? Again, are these legal constraints fixed or negotiable?
  7. Individual Interest Constraints. Individuals with conflicting interests may oppose the implementation of the solution. For example, an insecure supervisor may oppose the solution because he fears it will undermine his authority. Are these individual interest constraints fixed or negotiable?
  8. Organizational. Inconsistencies between the solution and the formal or informal rules of an organization may give rise to implementation obstacles. Implementing the solution may require support of those higher up in the management hierarchy. The solution may conflict with organization rules, management structures, traditions, or financial objectives. Once again, are these constraints fixed or flexible?
  9. Social, Cultural, or Political. The socio-technical system within which the solution is to be implemented contains certain social structures, cultural traditions, and political ideologies. How do these stand with respect to the solution? For example, does a climate of suspicion of high technology threaten to create political opposition to the solution? What kinds of social, cultural, or political problems could arise? Are these fixed or can they be altered through negotiation, education, or persuasion?

Ethics Tests For Solution Evaluation

Three ethics tests (reversibility, harm/beneficence, and public identification) encapsulate three ethical approaches (deontology, utilitarianism, and virtue ethics) and form the basis of stage three of the SDC, solution testing. A fourth test (a value realization test) builds upon the public identification/virtue ethics test by evaluating a solution in terms of the values it harmonizes, promotes, protects, or realizes. Finally, a code test provides an independent check on the ethics tests and also highlights intermediate moral concepts such as safety, health, welfare, faithful agency, conflict of interest, confidentiality, professional integrity, collegiality, privacy, property, free speech, and equity/access. The following section provides advice on how to use these tests.

Setting Up the Ethics Tests: Pitfalls to avoid

Set-Up Pitfalls: Mistakes in this area lead to the analysis becoming unfocused and getting lost in irrelevancies. (a) Agent-switching where the analysis falls prey to irrelevancies that crop up when the test application is not grounded in the standpoint of a single agent, (b) Sloppy action-description where the analysis fails because no specific action has been tested, (c) Test-switching where the analysis fails because one test is substituted for another. (For example, the public identification and reversibility tests are often reduced to the harm/beneficence test where harmful consequences are listed but not associated with the agent or stakeholders.)

Set up the test

  1. Identify the agent (the person who is going to perform the action)
  2. Describe the action or solution that is being tested (what the agent is going to do or perform)
  3. Identify the stakeholders (those individuals or groups who are going to be affected by the action), and their stakes (interests, values, goods, rights, needs, etc.)
  4. Identify, sort out, and weigh the consequences (the results the action is likely to bring about)

Harm/Beneficence Test

  • What harms would accompany the action under consideration? Would it produce physical or mental suffering, impose financial or non-financial costs, or deprive others of important or essential goods?
  • What benefits would this action bring about? Would it increase safety, quality of life, health, security, or other goods both moral and non-moral?
  • What is the magnitude of each of these consequences? Magnitude includes the likelihood that it will occur (probability), the severity of its impact (minor or major harm), and the range of people affected.
  • Identify one or two other viable alternatives and repeat these steps for them. Some of these may be modifications of the basic action that attempt to minimize some of the likely harms. These alternatives will establish a basis for assessing your alternative by comparing it with others.
  • Decide, on the basis of the test, which alternative produces the best ratio of benefits to harms.
  • Check for inequities in the distribution of harms and benefits. Do all the harms fall on one individual (or group)? Do all of the benefits fall on another? If harms and benefits are inequitably distributed, can they be redistributed? What is the impact of redistribution on the original solution imposed?
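
The magnitude questions above (probability, severity, range of people affected) suggest a rough expected-value comparison across alternatives. The alternatives and all numbers below are invented; real harm/beneficence testing is qualitative, but the arithmetic shows how the three factors combine.

```python
# Toy harm/beneficence comparison: each consequence carries a probability,
# a severity (positive = benefit, negative = harm), and a count of people
# affected. All figures are invented for illustration.
def expected_value(consequences):
    """Sum probability * severity * people over all consequences."""
    return sum(p * s * n for p, s, n in consequences)

alternatives = {
    "install scrubbers": [(0.9, +2, 100), (1.0, -1, 10)],  # benefit plus a small cost
    "do nothing":        [(0.6, -3, 100)],                 # likely widespread harm
}

best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
```

Even with a favorable total, the distribution check still matters: an alternative can maximize net benefit while concentrating all the harm on one group.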

Pitfalls of the Harm/Beneficence Test

  1. “Paralysis of Analysis” comes from considering too many consequences rather than focusing only on those relevant to your decision.
  2. Incomplete Analysis results from considering too few consequences. Often it indicates a failure of moral imagination which, in this case, is the ability to envision the consequences of each action alternative.
  3. Failure to compare different alternatives can lead to a decision that is too limited and one-sided.
  4. Failure to weigh harms against benefits occurs when decision makers lack the experience to make the qualitative comparisons required in ethical decision making.
  5. Finally, justice failures result from ignoring the fairness of the distribution of harms and benefits. This leads to a solution which may maximize benefits and minimize harms but still give rise to serious injustices in the distribution of these benefits and harms.

Reversibility Test

  1. Set up the test by (i) identifying the agent, (ii) describing the action, and (iii) identifying the stakeholders and their stakes.
  2. Use the stakeholder analysis to identify the relations to be reversed.
  3. Reverse roles between the agent (you) and each stakeholder: put them in your place (as the agent) and yourself in their place (as the one subjected to the action).
  4. If you were in their place, would you still find the action acceptable?

Cross Checks for Reversibility Test (These questions help you to check if you have carried out the reversibility test properly.)

  • Does the proposed action treat others with respect? (Does it recognize their autonomy or circumvent it?)
  • Does the action violate the rights of others? (Examples of rights: free and informed consent, privacy, freedom of conscience, due process, property, freedom of expression)
  • Would you recommend that this action become a universal rule?
  • Are you, through your action, treating others merely as means?

Pitfalls of the Reversibility Test

  • Leaving out a key stakeholder relation
  • Failing to recognize and address conflicts between stakeholders and their conflicting stakes
  • Confusing treating others with respect with capitulating to their demands (“Reversing with Hitler”)
  • Failing to reach closure, i.e., an overall, global reversal assessment that takes into account all the stakeholders the agent has reversed with.

Steps in Applying the Public Identification Test

  • Set up the analysis by identifying the agent, describing the action, and listing the key values or virtues at play in the situation.
  • Associate the action with the agent.
  • Describe what the action says about the agent as a person. Does it reveal him or her as someone associated with a virtue or a vice?

Alternative Version of Public Identification

  • Does the action under consideration realize justice or does it pose an excess or defect of justice?
  • Does the action realize responsibility or pose an excess or defect of responsibility?
  • Does the action realize reasonableness or pose too much or too little reasonableness?
  • Does the action realize honesty or pose too much or too little honesty?
  • Does the action realize integrity or pose too much or too little integrity?

Pitfalls of Public Identification

  • Action not associated with agent. The most common pitfall is failure to associate the agent and the action. The action may have bad consequences and it may treat individuals with respect but these points are not as important in the context of this test as what they imply about the agent as a person who deliberately performs such an action.
  • Failure to specify moral quality, virtue, or value. Another pitfall is to associate the action and agent but only ascribe a vague or ambiguous moral quality to the agent. To say, for example, that willfully harming the public is bad fails to zero in on precisely what moral quality this ascribes to the agent. Does it render him or her unjust, irresponsible, corrupt, dishonest, or unreasonable? The virtue list given above will help to specify this moral quality.

Code of Ethics Test

  • Does the action hold paramount the health, safety, and welfare of the public, i.e., those affected by the action but not able to participate in its design or execution?
  • Does the action maintain faithful agency with the client by not abusing trust, avoiding conflicts of interest, and maintaining confidences?
  • Is the action consistent with the reputation, honor, dignity, and integrity of the profession?
  • Does the action serve to maintain collegial relations with professional peers?

Meta Tests

  • The ethics and feasibility tests will not always converge on the same solution. The full explanation is complicated, but the short version is that the tests do not always agree on a given solution because each test (and the ethical theory it encapsulates) covers a different domain or dimension of the action situation. Meta tests turn this disadvantage to your advantage by feeding the interaction between the tests on a given solution back into the evaluation of that solution.
  • When the ethics tests converge on a given solution, this convergence is a sign of the strength and robustness of the solution and counts in its favor.
  • When a given solution responds well to one test but does poorly under another, this is a sign that the solution needs further development and revision. It is not a sign that one test is relevant while the others are not. Divergence between test results is a sign that the solution is weak.
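The convergence idea above can be expressed as a small diagnostic over test verdicts. The function name and the True/False encoding of a verdict are assumptions for illustration; real test results are nuanced judgments, not booleans.

```python
def meta_test(verdicts: dict) -> str:
    """Map each ethics test name to True (solution passes) or False (it fails).
    Convergence signals robustness; divergence signals a weak solution."""
    passed = sum(1 for ok in verdicts.values() if ok)
    if passed == len(verdicts):
        return "converges in favor: strong, robust solution"
    if passed == 0:
        return "converges against: reject the solution"
    return "divergent results: revise and further develop the solution"

# Hypothetical verdicts for one candidate solution
result = meta_test({
    "harm/beneficence": True,
    "reversibility": True,
    "public identification": False,
})
print(result)
```

Note that a divergent result is treated as a signal to revise the solution, mirroring the point above that divergence does not mean one test is relevant and the others are not.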

Application Exercise

You will now practice the four stages of decision making with a real world case. This case, Risk Assessment, came from a retreat on Business, Science, and Engineering Ethics held in Puerto Rico in December 1998. It was funded by the National Science Foundation, Grant SBR 9810253.

Risk Assessment Scenario

Case Scenario: You supervise a group of engineers working for a private laboratory with expertise in nuclear waste disposal and risk assessment. The DOE (Department of Energy) awarded a contract to your laboratory six years ago to do a risk assessment of various nuclear waste disposal sites. During the six years in which your team has been doing the study, new and more accurate calculations in risk assessment have become available. Your laboratory’s study, however, began with the older, simpler calculations and cannot integrate the newer without substantially delaying completion. You, as the leader of the team, propose a delay to the DOE on the grounds that it is necessary to use the more advanced calculations. Your position is that the laboratory needs more time because of the extensive calculations required; you argue that your group must use state of the art science in doing its risk assessment. The DOE says you are using overly high standards of risk assessment to prolong the process, extend the contract, and get more money for your company. They want you to use simpler calculations and finish the project; if you are unwilling to do so, they plan to find another company that thinks differently. Meanwhile, back at the laboratory, your supervisor (a high level company manager) expresses to you the concern that while good science is important in an academic setting, this is the real world and the contract with the DOE is in jeopardy. What should you do?

Part One: Problem Specification

  1. Specify the problem in the above scenario. Be as concise and specific as possible.
  2. Is your problem best specifiable as a disagreement? Between whom? Over what?
  3. Can your problem be specified as a value conflict? What are the values in conflict? Are they moral, non-moral, or both?

Part Two: Solution Generation

  1. Quickly, and without analysis or criticism, brainstorm five to ten solutions.
  2. Refine your solution list. Can solutions be eliminated? (On what basis?) Can solutions be combined? Can solutions be combined as plan A and plan B?
  3. If you specified your problem as a disagreement, how do your solutions resolve the disagreement? Can you negotiate interests over positions? What if your plan of action doesn't work?
  4. If you formulated your problem as a value conflict, how do your solutions resolve this conflict? By integrating the conflicting values? By partially realizing them through a value compromise? By trading one value off for another?

Part Three: Solution Testing

  1. Construct a solution evaluation matrix to compare two to three solution alternatives.
  2. Choose a clearly bad solution, then compare your two strongest solutions against it.
  3. Be sure to avoid the pitfalls described above and set up each test carefully.
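A solution evaluation matrix like the one described in step 1 might be tabulated as follows. The candidate solutions, the score scale, and every numeric entry are hypothetical placeholders; in an actual analysis the cells would hold reasoned qualitative judgments from each test, not numbers.

```python
# Rows are candidate solutions, columns are the four ethics tests; scores are
# hypothetical ratings from -2 (fails badly) to +2 (passes strongly).
tests = ["harm/beneficence", "reversibility", "public identification", "code of ethics"]
matrix = {
    "delay and use new calculations": [1, 2, 2, 2],
    "finish with old calculations":   [-1, -1, -2, -1],
    "negotiate a partial update":     [1, 1, 1, 1],
}

# Rank solutions by total score across the tests, strongest first
ranked = sorted(matrix.items(), key=lambda kv: sum(kv[1]), reverse=True)
for solution, scores in ranked:
    row = "  ".join(f"{t}: {s:+d}" for t, s in zip(tests, scores))
    print(f"total {sum(scores):+d}  {solution:32s} {row}")
```

Including a deliberately bad row (here, "finish with old calculations") follows step 2 above: it anchors the scale and makes the comparison between the two strong candidates easier to read.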

Part Four: Solution Implementation

  1. Develop an implementation plan for your best solution. This plan should anticipate obstacles and offer means for overcoming them.
  2. Prepare a feasibility table outlining these issues using the table presented above.
  3. Remember that each of these feasibility constraints is negotiable and therefore flexible. If you choose to set aside a feasibility constraint then you need to outline how you would negotiate the extension of that constraint.
Figure 1: Clicking on this figure will allow you to open a presentation designed to introduce problem solving in ethics as analogous to that in design, summarize the concept of a socio-technical system, and provide an orientation in the four stages of problem solving. This presentation was given February 28, 2008 at UPRM for ADMI 6005 students, Special Topics in Research Ethics.
Decision-Making Presentation
Media File: Decision Making Manual V4.pptx

Problem Solving Presentation

Media File: Decision Making Manual V5.pptx

Vigo Socio-Technical System Table and Problems

Media File: Vigo STS.docx

Figure 2: This exercise is designed to give you practice with the three frameworks described in this module. It is based on the case, "When in Aguadilla."
Decision Making Worksheet
Media File: Decision Making Worksheet.docx

Test Rubric Fall 2009: Problem-Solving

Media File: PE_Rubric_EO_S09.docx
