Three Frameworks for Ethical Decision Making and Good Computing Reports

Module by: William Frey

Summary: This module provides three frameworks that are essential to Computer and Engineering Ethics classes taught at the University of Puerto Rico - Mayaguez during the 2006-7 academic year. The first framework converts the Software Development Cycle into a decision-making framework consisting of problem specification, solution generation, solution testing, and solution implementation. The second framework zeros in on the solution testing phase of the software development cycle by offering four tests to evaluate and rank solutions in terms of their ethical implications. The third framework offers a feasibility test designed to identify obstacles to implementing solutions that arise from situational constraints such as resource, interest, and technical limitations. These frameworks are abbreviated from materials that will eventually be published in Good Computing: A Virtue Approach to Computer Ethics, which is being authored by Chuck Huff, William Frey, and Jose Cruz-Cruz. They can also be supplemented by consulting Engineering Ethics: Concepts and Cases by Rabins, Harris, and Pritchard.


Module Introduction

In this module you will learn and practice three frameworks designed to integrate ethics into decision making in the areas of practical and occupational ethics. The first framework divides the decision-making process into four stages: problem specification, solution generation, solution testing, and solution implementation. It is based on an analogy between ethics and design problems that is detailed in a table presented below. The second framework focuses on the process of solution testing by providing four tests that will help you to evaluate and rank alternative courses of action. The reversibility, harm/beneficence, and public identification tests each "encapsulate" or summarize an important ethical theory. A value realization test assesses courses of action in terms of their ability to realize or harmonize different moral and nonmoral values. Finally, a feasibility test will help you to uncover interest, resource, and technical constraints that will affect and possibly impede the realization of your solution or decision. Taken together, these three frameworks will help steer you toward designing and implementing ethical decisions in the professional and occupational areas.

Two online resources provide more extensive background information. The first provides background information on the ethics tests, socio-technical analysis, and intermediate moral concepts. The second explores in more detail the analogy between ethics and design problems. Much of this information will be published in Good Computing: A Virtue Approach to Computer Ethics, a textbook of cases and decision-making techniques in computer ethics that is being authored by Chuck Huff, William Frey, and Jose A. Cruz-Cruz.

Problem-Solving or Decision-Making Framework: Software Development Cycle

Traditionally, decision-making frameworks in professional and occupational ethics have been taken from rational decision procedures used in economics. While these are useful, they lead one to think that ethical decisions are already "out there" waiting to be discovered. In contrast, taking a design approach to ethical decision making emphasizes that ethical decisions must be created, not discovered. This, in turn, emphasizes the importance of moral imagination and moral creativity. Caroline Whitbeck in Ethics in Engineering Practice and Research describes this aspect of ethical decision making through the analogy she draws between ethics and design problems in chapter one. Here she rejects the idea that ethical problems are multiple-choice problems. We solve ethical problems not by choosing between ready-made solutions given with the situation; rather, we use our moral creativity and moral imagination to design these solutions. Chuck Huff builds on this by modifying the design method used in software engineering so that it can help structure the process of framing ethical situations and creating actions to bring these situations to a successful and ethical conclusion. The key points in the analogy between ethical and design problems are summarized in the table presented just below.

Table 1: Analogy between design and ethics problem-solving

Design Problem: Construct a prototype that optimizes (or satisfices) designated specifications.
Ethical Problem: Construct a solution that integrates and realizes ethical values (justice, responsibility, reasonableness, respect, and safety).

Design Problem: Resolve conflicts between different specifications by means of integration.
Ethical Problem: Resolve conflicts between values (moral vs. moral or moral vs. non-moral) by integration.

Design Problem: Test prototype over the different specifications.
Ethical Problem: Test solution over different ethical considerations encapsulated in ethics tests.

Design Problem: Implement tested design over background constraints.
Ethical Problem: Implement ethically tested solution over resource, interest, and technical constraints.

Stages in Ethical Decision Making

(1) problem specification, (2) solution generation, (3) solution testing, and (4) solution implementation.
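The four stages above can be sketched as an ordered checklist. The Python sketch below (hypothetical names, not part of the original module) simply encodes the order so that each stage is addressed before the next:

```python
# A minimal, illustrative sketch of the four-stage decision-making framework
# as an ordered checklist mirroring the software development cycle analogy.
# All names here are invented for illustration.

STAGES = (
    "problem specification",
    "solution generation",
    "solution testing",
    "solution implementation",
)

def next_stage(completed):
    """Return the first stage not yet completed, or None when the cycle is done."""
    for stage in STAGES:
        if stage not in completed:
            return stage
    return None
```

In practice the cycle is iterative rather than strictly linear: a failed solution test can send the decision maker back to problem specification.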

Problem specification

Problem specification involves exercising moral imagination to specify the socio-technical system (including the stakeholders) that will influence and will be influenced by the decision we are about to make. Stating the problem clearly and concisely is essential to design problems; getting the problem right helps structure and channel the process of design and implementing the solution. There is no algorithm available to crank out effective problem specification. Instead, we offer a series of guidelines or rules of thumb to get you started in a process that is accomplished by the skillful exercise of moral imagination.

Different Ways of Specifying the Problem

  • Many problems can be specified as disagreements. For example, you disagree with your supervisor over the safety of the manufacturing environment. Disagreements over facts can be resolved by gathering more information. Disagreements over concepts (you and your supervisor have different ideas of what safety means) require working toward a common definition.
  • Other problems involve conflicting values. You advocate installing pollution control technology because you value environmental quality and safety. Your supervisor resists this course of action because she values maintaining a solid profit margin. This is a conflict between a moral value (safety and environmental quality) and a nonmoral value (solid profits). Moral values can also conflict with one another in a given situation. Using John Doe lawsuits to force Internet Service Providers to reveal the real identities of defamers certainly protects the privacy and reputations of potential targets of defamation. But it also places restrictions on legitimate free speech by making it possible for powerful wrongdoers to intimidate those who would publicize their wrongdoing. Here the moral values of privacy and free speech are in conflict. Value conflicts can be addressed by harmonizing the conflicting values, compromising on conflicting values by partially realizing them, or setting one value aside while realizing the other (=value trade offs).
  • If you specify your problem as a disagreement, you need to describe the facts or concepts about which there is disagreement.
  • If you specify your problem as a conflict, you need to describe the values that conflict in the situation.
  • One useful way of specifying a problem is to carry out a stakeholder analysis. A stakeholder is any group or individual that has a vital interest at risk in the situation. Stakeholder interests frequently come into conflict and solving these conflicts requires developing strategies to reconcile and realize the conflicting stakes.
  • Another way of identifying and specifying problems is to carry out a socio-technical analysis. Socio-technical systems embody values, and problems can be anticipated and prevented by specifying possible value conflicts. Integrating an information system into the socio-technical system of a small business can create three kinds of problems. The values present in the information system may conflict with those present in the socio-technical system. (Workers may feel that the new information system invades their privacy.) The introduction of a new technology may magnify an existing value conflict. Digitalizing textbooks may undermine copyrights because digital media are easy to copy and disseminate on the Internet. Finally, introducing something new into a socio-technical system may set in motion a chain of events that will eventually harm different stakeholders in the socio-technical system. Giving laptop computers to public school students may produce long-term environmental harm when careless disposal of spent laptops releases toxic materials into the environment.
  • The following table helps summarize some of these problem categories and then outlines generic solutions.
Table 2: Problem types and solution outlines

Problem Type: Factual
Solution Outline: Type and mode of gathering information.

Problem Type: Conceptual
Solution Outline: Concept in dispute and method for agreeing on its definition.

Problem Type: Value Conflict (sub-types: moral vs. moral, non-moral vs. moral, non-moral vs. non-moral)
Solution Outline: Value integrative; partially value integrative; trade off.

Problem Type: Social Justice
Solution Outline: Strategy for restoring justice.

Problem Type: Value Realization
Solution Outline: Strategy for maintaining integrity; value-integrative design strategy.

Problem Type: Intermediate Moral Value (sub-types: public welfare, faithful agency, professional integrity, peer collegiality)
Solution Outline: Realizing values; removing value conflicts; prioritizing values for trade offs.

Solution Generation

In solution generation, agents exercise moral creativity by brainstorming to come up with solution options designed to resolve the disagreements and value conflicts identified in the problem specification stage. Brainstorming is crucial to generating nonobvious solutions to difficult, intractable problems. This process must take place within a non-polarized environment where the members of the group respect and trust one another. (See the module on the Ethics of Group Work for more information on how groups can be successful and the pitfalls that commonly trip up groups.) Groups effectively initiate the brainstorming process by suspending criticism and analysis. After the process is completed (say, by meeting a quota), participants can refine the solutions generated by combining them, eliminating those that don't fit the problem, and ranking them in terms of their ethics and feasibility. If a problem can't be solved, perhaps it can be dissolved through reformulation. If an entire problem can't be solved, perhaps it can be broken down into parts, some of which can be readily solved.

Solution Testing: The solutions developed in the second stage must be tested in various ways.

  1. Reversibility: Are they reversible between the agent and key stakeholders?
  2. Harm/Beneficence: Do they minimize harm? Do they produce benefits that are justly distributed among stakeholders?
  3. Public Identification: Are these actions with which I am willing to be publicly identified? Do these actions identify me as a moral person?
  4. Value: Do these actions realize key moral values and instantiate moral virtues?
  5. Code: A code test can be added that refers to a professional or occupational code of ethics. Do the solutions comply with the professional’s or practitioner's code of ethics?
The solution evaluation matrix presented just below provides a nice way of modeling and summarizing the process of solution testing.
Table 3: Solution Evaluation Matrix

Reversibility: Is the solution reversible with stakeholders? Does it honor basic rights?
Harm/Beneficence: Does the solution produce the best benefit/harm ratio? Does the solution maximize utility?
Virtue: Does the solution express and integrate key virtues?
Value: Moral values realized? Moral values frustrated? Value conflicts resolved or exacerbated?
Code: Does the solution violate any code provisions?

Each row of the matrix (e.g., "Best solution," "Second best") records how one candidate solution fares under each of these tests.
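One hedged way to operationalize the evaluation matrix above is as a scoring grid: each candidate solution receives a score under each ethics test, and totals are compared. The solutions and scores below are invented placeholders, not part of the original module:

```python
# An illustrative sketch of the solution evaluation matrix as a scoring grid.
# Scores run from -2 (fails the test badly) to +2 (passes strongly); both the
# candidate solutions and their scores are hypothetical examples.

TESTS = ["reversibility", "harm/beneficence", "virtue", "value", "code"]

def rank_solutions(matrix):
    """matrix: {solution_name: {test_name: score}} -> names, best total first."""
    return sorted(matrix, key=lambda s: sum(matrix[s].values()), reverse=True)

matrix = {
    "install pollution controls now": {"reversibility": 2, "harm/beneficence": 1,
                                       "virtue": 2, "value": 1, "code": 2},
    "delay and study further":        {"reversibility": 0, "harm/beneficence": -1,
                                       "virtue": 0, "value": 0, "code": 1},
}
```

Numeric totals are only a summary device; the qualitative reasoning behind each score is what the tests actually require.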

Solution Implementation

The chosen solution must be examined in terms of how well it responds to various situational constraints that could impede its implementation. What will be its costs? Can it be implemented within necessary time constraints? Does it honor recognized technical limitations or does it require pushing these back through innovation and discovery? Does it comply with legal and regulatory requirements? Finally, could the surrounding organizational, political, and social environments give rise to obstacles to the implementation of the solution? In general this phase requires looking at interest, technical, and resource constraints or limitations. A Feasibility Matrix helps to guide this process.

Table 4: Feasibility Matrix

Resource Constraints: Time, Cost, Materials
Technical Constraints: Applicable Technology, Manufacturability
Interest Constraints: Organizational, Legal, Social/Political/Cultural

Ethical Frameworks

Three ethics tests (reversibility, harm/beneficence, and public identification) encapsulate three ethical approaches (deontology, utilitarianism, and virtue ethics) and form the basis of stage three of the SDC, solution testing. A fourth test (a value realization test) builds upon the public identification/virtue ethics test by evaluating a solution in terms of the values it harmonizes, promotes, protects, or realizes. Finally, a code test provides an independent check on the ethics tests and also highlights intermediate moral concepts such as safety, health, welfare, faithful agency, conflict of interest, confidentiality, professional integrity, collegiality, privacy, property, free speech, and equity/access. The following section provides advice on how to use these tests.

Set-Up Pitfalls: Mistakes in this area cause the analysis to become unfocused and lost in irrelevancies. (a) Agent-switching, where the analysis falls prey to irrelevancies because the test application is not grounded in the standpoint of a single agent; (b) sloppy action-description, where the analysis fails because no specific action has been tested; (c) test-switching, where the analysis fails because one test is substituted for another. (For example, the public identification and reversibility tests are often reduced to the harm/beneficence test when harmful consequences are listed but not associated with the agent or stakeholders.)

Set up the test

  1. Identify the agent (the person who is going to perform the action)
  2. Describe the action or solution that is being tested (what the agent is going to do or perform)
  3. Identify the stakeholders (those individuals or groups who are going to be affected by the action) and their stakes (interests, values, goods, rights, needs, etc.)
  4. Identify, sort out, and weigh the consequences (the results the action is likely to bring about)

Harm/Beneficence Test

  • What harms would accompany the action under consideration? Would it produce physical or mental suffering, impose financial or non-financial costs, or deprive others of important or essential goods?
  • What benefits would this action bring about? Would it increase safety, quality of life, health, security, or other goods both moral and non-moral?
  • What is the magnitude of each of these consequences? Magnitude includes the likelihood that the consequence will occur (probability), the severity of its impact (minor or major harm), and the range of people affected.
  • Identify one or two other viable alternatives and repeat these steps for them. Some of these may be modifications of the basic action that attempt to minimize some of the likely harms. These alternatives will establish a basis for assessing your alternative by comparing it with others.
  • Decide, on the basis of the test, which alternative produces the best ratio of benefits to harms.
  • Check for inequities in the distribution of harms and benefits. Do all the harms fall on one individual (or group)? Do all of the benefits fall on another? If harms and benefits are inequitably distributed, can they be redistributed? What is the impact of redistribution on the original solution imposed?
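The notion of magnitude described above (probability, severity, and range of people affected) can be sketched as a simple weighted sum that makes alternatives comparable. All numbers below are illustrative assumptions, not data from the module:

```python
# An illustrative sketch of the harm/beneficence magnitude idea: each
# consequence is weighted by its probability, its severity (negative for
# harms, positive for benefits), and the number of people affected.
# The two options and all figures are hypothetical.

def magnitude(probability, severity, people_affected):
    """Expected magnitude of a single consequence."""
    return probability * severity * people_affected

def net_benefit(consequences):
    """Sum of expected magnitudes over (probability, severity, people) triples."""
    return sum(magnitude(p, s, n) for p, s, n in consequences)

option_a = [(0.9, +2, 100), (0.1, -10, 50)]   # likely moderate benefit, small risk of harm
option_b = [(0.5, +1, 100), (0.4, -5, 200)]   # weaker benefit, widely spread likely harm
```

Such a tally is only a first pass: as the next bullet notes, an option with the better total can still distribute its harms and benefits inequitably, which the test must check separately.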

Pitfalls of the Harm/Beneficence Test

  1. "Paralysis of Analysis" comes from considering too many consequences rather than focusing only on those relevant to your decision.
  2. Incomplete Analysis results from considering too few consequences. Often it indicates a failure of moral imagination which, in this case, is the ability to envision the consequences of each action alternative.
  3. Failure to compare different alternatives can lead to a decision that is too limited and one-sided.
  4. Failure to weigh harms against benefits occurs when decision makers lack the experience to make the qualitative comparisons required in ethical decision making.
  5. Finally, justice failures result from ignoring the fairness of the distribution of harms and benefits. This leads to a solution which may maximize benefits and minimize harms but still give rise to serious injustices in the distribution of these benefits and harms.

Reversibility Test

  1. Set up the test by (i) identifying the agent, (ii) describing the action, and (iii) identifying the stakeholders and their stakes.
  2. Use the stakeholder analysis to identify the relations to be reversed.
  3. Reverse roles between the agent (you) and each stakeholder: put them in your place (as the agent) and yourself in their place (as the one subjected to the action).
  4. If you were in their place, would you still find the action acceptable?

Cross Checks for Reversibility Test (These questions help you to check if you have carried out the reversibility test properly.)

  • Does the proposed action treat others with respect? (Does it recognize their autonomy or circumvent it?)
  • Does the action violate the rights of others? (Examples of rights: free and informed consent, privacy, freedom of conscience, due process, property, freedom of expression)
  • Would you recommend that this action become a universal rule?
  • Are you, through your action, treating others merely as means?

Pitfalls of the Reversibility Test

  • Leaving out a key stakeholder relation
  • Failing to recognize and address conflicts between stakeholders and their conflicting stakes
  • Confusing treating others with respect with capitulating to their demands (“Reversing with Hitler”)
  • Failing to reach closure, i.e., an overall, global reversal assessment that takes into account all the stakeholders the agent has reversed with.

Steps in Applying the Public Identification Test

  • Set up the analysis by identifying the agent, describing the action, and listing the key values or virtues at play in the situation.
  • Associate the action with the agent.
  • Describe what the action says about the agent as a person. Does it reveal him or her as someone associated with a virtue or a vice?

Alternative Version of Public Identification

  • Does the action under consideration realize justice or does it pose an excess or defect of justice?
  • Does the action realize responsibility or pose an excess or defect of responsibility?
  • Does the action realize reasonableness or pose too much or too little reasonableness?
  • Does the action realize honesty or pose too much or too little honesty?
  • Does the action realize integrity or pose too much or too little integrity?

Pitfalls of Public Identification

  • Action not associated with agent. The most common pitfall is failure to associate the agent and the action. The action may have bad consequences and it may treat individuals with respect but these points are not as important in the context of this test as what they imply about the agent as a person who deliberately performs such an action.
  • Failure to specify moral quality, virtue, or value. Another pitfall is to associate the action and agent but only ascribe a vague or ambiguous moral quality to the agent. To say, for example, that willfully harming the public is bad fails to zero in on precisely what moral quality this ascribes to the agent. Does it render him or her unjust, irresponsible, corrupt, dishonest, or unreasonable? The virtue list given above will help to specify this moral quality.

Code of Ethics Test

  • Does the action hold paramount the health, safety, and welfare of the public, i.e., those affected by the action but not able to participate in its design or execution?
  • Does the action maintain faithful agency with the client by not abusing trust, avoiding conflicts of interest, and maintaining confidences?
  • Is the action consistent with the reputation, honor, dignity, and integrity of the profession?
  • Does the action serve to maintain collegial relations with professional peers?

Meta Tests

  • The ethics and feasibility tests will not always converge on the same solution. There is a complicated answer for why this is the case but the simple version is that the tests do not always agree on a given solution because each test (and the ethical theory it encapsulates) covers a different domain or dimension of the action situation. Meta tests turn this disadvantage to your advantage by feeding the interaction between the tests on a given solution back into the evaluation of that solution.
  • When the ethics tests converge on a given solution, this convergence is a sign of the strength and robustness of the solution and counts in its favor.
  • When a given solution responds well to one test but does poorly under another, this is a sign that the solution needs further development and revision. It is not a sign that one test is relevant while the others are not. Divergence between test results is a sign that the solution is weak.
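The convergence/divergence idea above can be sketched as a simple check on the spread of a solution's test scores; the scoring scale and the threshold of 2 are arbitrary illustrative choices, not part of the original module:

```python
# An illustrative meta-test sketch: if a solution's scores across the ethics
# tests spread widely, the divergence signals that the solution needs further
# development and revision. Scale and threshold are invented for illustration.

def divergent(scores, spread=2):
    """Return True when test scores diverge enough to flag the solution."""
    return max(scores.values()) - min(scores.values()) >= spread
```

A solution that scores +2 on reversibility but -1 on harm/beneficence would be flagged for revision, while uniformly high scores would count as robust convergence.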

Solution Implementation

  1. This stage requires carrying out a Feasibility Test which identifies constraints that could interfere with realizing a solution. This test also sorts out constraints into resource (time, cost, materials), interest (individuals, organizations, legal, social, political), and technical limitations. By identifying situational constraints, problem-solvers can anticipate implementation problems and take early steps to prevent or mitigate them.
  2. Time. Is there a deadline within which the solution has to be enacted? Is this deadline fixed or negotiable?
  3. Financial. Are there cost constraints on implementing the ethical solution? Can these be extended by raising more funds? Can they be extended by cutting existing costs? Can agents negotiate for more money for implementation?
  4. Technical. Technical limits constrain the ability to implement solutions. What, then, are the technical limitations to realizing and implementing the solution? Could these be moved back by modifying the solution or by adopting new technologies?
  5. Manufacturability. Are there manufacturing constraints on the solution at hand? Given time, cost, and technical feasibility, what are the manufacturing limits to implementing the solution? Once again, are these limits fixed or flexible, rigid or negotiable?
  6. Legal. How does the proposed solution stand with respect to existing laws, legal structures, and regulations? Does it create disposal problems addressed in existing regulations? Does it respond to and minimize the possibility of adverse legal action? Are there legal constraints that go against the ethical values embodied in the solution? Again, are these legal constraints fixed or negotiable?
  7. Individual Interest Constraints. Individuals with conflicting interests may oppose the implementation of the solution. For example, an insecure supervisor may oppose the solution because he fears it will undermine his authority. Are these individual interest constraints fixed or negotiable?
  8. Organizational. Inconsistencies between the solution and the formal or informal rules of an organization may give rise to implementation obstacles. Implementing the solution may require support of those higher up in the management hierarchy. The solution may conflict with organization rules, management structures, traditions, or financial objectives. Once again, are these constraints fixed or flexible?
  9. Social, Cultural, or Political. The socio-technical system within which the solution is to be implemented contains certain social structures, cultural traditions, and political ideologies. How do these stand with respect to the solution? For example, does a climate of suspicion of high technology threaten to create political opposition to the solution? What kinds of social, cultural, or political problems could arise? Are these fixed or can they be altered through negotiation, education, or persuasion?
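The feasibility checklist above can be sketched as a small data structure that separates fixed constraints from negotiable ones; the categories follow Table 4, while the example entries are invented:

```python
# An illustrative sketch of the feasibility test as a checklist: violated
# constraints that are not negotiable are hard blockers for implementation.
# Categories follow the Feasibility Matrix; the example entries are invented.

CATEGORIES = {
    "resource": ["time", "cost", "materials"],
    "technical": ["applicable technology", "manufacturability"],
    "interest": ["organizational", "legal", "social/political/cultural"],
}

def blocking_constraints(violations, negotiable):
    """Return violated constraints that cannot be negotiated away."""
    return [c for c in violations if c not in negotiable]

violated = ["cost", "organizational"]
negotiable = {"cost"}   # e.g., the budget can be renegotiated
```

Here the cost overrun is negotiable and drops out, leaving the organizational obstacle as the one that forces either modifying the solution or working to change the constraint.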

The Feasibility Test focuses on situational constraints. How could these hinder the implementation of the solution? Should the solution be modified to ease implementation? Can the constraints be removed or remodeled through negotiation, compromise, or education? Can implementation be facilitated by modifying both the solution and the constraints?
