When it comes to making decisions about COVID under uncertainty, the basics of utility functions, indifference curves, and AI can help point the way.

John Blyler

November 22, 2020

6 Min Read
(Image source: Adobe Stock)

The COVID-19 pandemic is a new experience for most of us, which is part of what makes it so dangerous. People tend to learn through experience and prefer a clear path between cause and effect. Ambiguity muddies that path, adding significant uncertainty to our decisions, with potentially lethal results.

Most people can see the cause and effect between the more obvious protections against COVID-19, namely washing hands, wearing masks, and staying socially distant. Still, far too many otherwise reasonable people take unnecessary risks, even at the highest levels of the current government. One reason is that, at least in the U.S., the populace has received mixed signals and conflicting messages from its leaders. That adds to the existing uncertainty of the pandemic and distorts people's perception of risk.

The pandemic has come with a number of challenges to those who need to make rational decisions, such as policymakers. At the early stages of the pandemic, there was a high degree of uncertainty in the scientific community with little quality evidence and plenty of disagreements among experts and models. In the early days, weeks, and even months of the pandemic, there was uncertainty about the basic characteristics of the virus, such as its transmissibility, severity, and natural history. This is one reason why there was much confusion as to the value of wearing masks in public, at least in the U.S.


Later during the pandemic, the problem changed to one of information overload. Given such uncertainty, no single model or explanation can be truly predictive in terms of an outbreak management plan. But making decisions under varying degrees of uncertainty is not a new problem. Indeed, systems engineering decisions are often made under uncertainty and risk.

According to NASA, uncertainty exists because of our inability to predict the future with certainty. Decisions that require an understanding of a future system state naturally carry risk, since the future outcome of today's actions is not always certain.

Decisions Under Uncertainty

There are three basic types of decision models: decision-making under certainty, under uncertainty, and under risk. All models are based on the desired outcome, which may be objective or subjective. Objective outcomes can be expressed in quantitative measures, e.g., payoffs as profits in dollars, costs (negative payoffs) in dollars, yield in pounds, etc. Conversely, subjective outcomes are valued on a ranking scale, e.g., expressions of preference (a good corporate image), higher-quality outputs, etc.

Most decisions have a number of variables and possible actions, each of which could give rise to one or more possible outcomes with different probabilities of occurrence. The rational way to approach such decisions is to identify all possible outcomes, determine their values (positive or negative) and the probabilities resulting from each course of action, and then calculate an "expected value" for each course of action. This value is the probability-weighted average of the outcomes; the rational choice is the course of action with the highest total expected value.
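To make that arithmetic concrete, here is a minimal Python sketch of the calculation. The two courses of action, their payoffs, and the probabilities are purely illustrative assumptions.

```python
# Minimal sketch: expected value of two hypothetical courses of action.
# The payoffs (in dollars) and probabilities below are illustrative only.
actions = {
    "action_A": [(1000, 0.6), (-500, 0.4)],   # (payoff, probability) pairs
    "action_B": [(400, 0.9), (-100, 0.1)],
}

def expected_value(outcomes):
    """Sum of payoff * probability over all possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

for name, outcomes in actions.items():
    print(name, expected_value(outcomes))

# The rational choice is the action with the highest total expected value.
best = max(actions, key=lambda name: expected_value(actions[name]))
print("best:", best)
```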

Put another way, engineers often have to balance technical risks that contain multiple objectives, i.e., trading off between high performance, low power, reduced size or weight, and cost. However, decisions are easier when they can be compared on a single scale to provide an “apples to apples” comparison.

One approach is to convert these different objectives to a single common scale, either monetary (dollars) or utility. It's fairly easy to compare alternatives, especially business alternatives, where money is the natural measure. On the other hand, it is difficult to equate some things to money, such as certain feature sets or human life.

In those cases, a utility function must be calculated. In economics, a utility function measures the individual preferences for goods or services beyond the explicit monetary value of those goods. Utility functions provide a way to transform diverse criteria into a common, dimensionless scale. In this way, one could compare diverse criteria such as cost (dollars), schedule (years), technical performance (widgets per year), and risk (high, medium, or low) in the same dimensionless units.
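As a rough illustration, the Python sketch below maps a few criteria onto a common 0-to-1 utility scale and combines them with weights. The criterion ranges, the weights, and the linear utility shape are all assumptions made for the example, not a prescribed model.

```python
# Minimal sketch: mapping diverse criteria onto a common, dimensionless 0-to-1
# utility scale and combining them with weights. Ranges, weights, and the
# linear utility shape are illustrative assumptions.

def linear_utility(value, worst, best):
    """Scale a raw criterion value to [0, 1]; 0 = worst acceptable, 1 = best."""
    return (value - worst) / (best - worst)

# Hypothetical design alternative: cost in dollars, schedule in years,
# performance in widgets per year.
cost_u     = linear_utility(value=8e6, worst=10e6, best=5e6)   # lower cost is better
schedule_u = linear_utility(value=2.0, worst=3.0,  best=1.0)   # shorter is better
perf_u     = linear_utility(value=900, worst=500,  best=1000)  # higher is better

weights = {"cost": 0.4, "schedule": 0.2, "performance": 0.4}   # assumed priorities

total_utility = (weights["cost"] * cost_u
                 + weights["schedule"] * schedule_u
                 + weights["performance"] * perf_u)
print(round(total_utility, 3))
```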

Utility functions are usually expressed as a number, but the numbers differ depending on the individual preferences of those making the calculations. It's often more useful to plot utility functions as indifference curves, which map the combinations of criteria (for example, various engineering design tradeoffs) that correspond to the same level of utility; curves farther from the origin correspond to higher utility. (Image source: SilverStar at English Wikipedia, CC BY-SA 3.0)

Indifference map showing three indifference curves.
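To reproduce a figure like the one above, the sketch below traces indifference curves for an assumed utility function U(x, y) = x * y; each curve holds utility constant, and curves farther from the origin correspond to higher utility.

```python
# Minimal sketch: tracing indifference curves for an assumed utility
# function U(x, y) = x * y. Each curve holds utility constant; curves
# farther from the origin correspond to higher utility.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.5, 10, 200)
for level in (4, 9, 16):            # three constant-utility levels
    y = level / x                   # solve U = x * y for y
    plt.plot(x, y, label=f"U = {level}")

plt.xlabel("Good or criterion X")
plt.ylabel("Good or criterion Y")
plt.legend()
plt.show()
```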

Utility Example

Since the outbreak of COVID-19, physicians have had to screen patients to decide which should be admitted to an intensive care unit (ICU) (sickest), given a hospital bed (very sick), or isolated at home (sick). Further complicating this decision is the acute lack of physical and material resources available to the physician. This screening problem could be addressed with a utility-based model.

Such a model could be used to determine which of the three alternatives is best for a COVID-19 patient, depending upon the patient's level of symptoms and complications.
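As a purely hypothetical illustration, and not a clinical tool, the sketch below scores a patient on a few assumed indicators and maps that score to one of the three alternatives. The indicators, weights, and thresholds are invented for demonstration.

```python
# Illustrative sketch only, not a clinical tool: a utility-style triage model
# that scores a patient against three alternatives (home isolation, hospital
# bed, ICU). The indicator weights and thresholds are invented assumptions.

def severity_score(oxygen_saturation, respiratory_rate, age):
    """Combine a few hypothetical indicators into one dimensionless score."""
    score = 0.0
    score += max(0.0, (94 - oxygen_saturation) * 0.1)   # low SpO2 raises severity
    score += max(0.0, (respiratory_rate - 20) * 0.05)   # fast breathing raises it
    score += max(0.0, (age - 60) * 0.01)                # age adds modest weight
    return score

def recommend(score):
    if score < 0.3:
        return "isolate at home (sick)"
    elif score < 0.8:
        return "hospital bed (very sick)"
    return "ICU (sickest)"

print(recommend(severity_score(oxygen_saturation=91, respiratory_rate=26, age=70)))
```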

Another area where utility functions and related curves have found renewed use is in the development of artificial intelligence (AI) systems. As mentioned earlier, utility functions assign weights or values to certain actions that an individual, society, or AI-based system can use to make decisions.

Utility functions essentially assign weights, or ideal utility values, to the actions an AI system can take. The system can then use its utility function to steer its actions toward the desired outcome.
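A minimal sketch of that decision rule, with assumed actions, outcome probabilities, and utility values, might look like this:

```python
# Minimal sketch: an agent-style decision rule that picks the action with
# the highest expected utility. The outcomes, probabilities, and utility
# values are illustrative assumptions, not learned from data.

utilities = {"good_outcome": 1.0, "neutral": 0.3, "bad_outcome": -1.0}

# P(outcome | action) for two hypothetical actions.
outcome_probs = {
    "act_cautiously":   {"good_outcome": 0.5, "neutral": 0.45, "bad_outcome": 0.05},
    "act_aggressively": {"good_outcome": 0.7, "neutral": 0.1,  "bad_outcome": 0.2},
}

def expected_utility(action):
    return sum(p * utilities[outcome] for outcome, p in outcome_probs[action].items())

best_action = max(outcome_probs, key=expected_utility)
print(best_action, round(expected_utility(best_action), 3))
```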

How do AI systems learn to implement utility functions and make good decisions? AI systems are created by hardware and software designers, who use axioms and algorithms to help the AI learn from training data. That is why the training of such systems is so important.

It is not inconceivable that AI-based utility-function modeling and related software will be refined to help deal with the growing complexity of future diseases and pandemics. Will that be enough to change the questionable decisions made by ordinary people when uncertainty is involved? Only time will tell.


About the Author(s)

John Blyler

John Blyler is a former Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an engineer and editor within the advanced manufacturing, IoT, and semiconductor industries. John has co-authored books related to RF design, system engineering, and electronics for IEEE, Wiley, and Elsevier. He currently serves as a standards editor for Accellera-IEEE. He has been an affiliate professor at Portland State University and a lecturer at UC-Irvine.

