
How to Measure Anything: Finding the Value of "Intangibles" in Business

Praise for How to Measure Anything: Finding the Value of "Intangibles" in Business

"I love this book. Douglas Hubbard helps us create a path to know the answer to almost any question in business, in science, or in life . . . Hubbard helps us by showing us that when we seek metrics to solve problems, we are really trying to know something better than we know it now. How to Measure Anything provides just the tools most of us need to measure anything better, to gain that insight, to make progress, and to succeed."
-Peter Tippett, PhD, M.D.
Chief Technology Officer at CyberTrust
and inventor of the first antivirus software

"Doug Hubbard has provided an easy-to-read, demystifying explanation of how managers can inform themselves to make less risky, more profitable business decisions. We encourage our clients to try his powerful, practical techniques."
-Peter Schay
EVP and COO of
The Advisory Council

"As a reader you soon realize that actually everything can be measured while learning how to measure only what matters. This book cuts through conventional clichés and business rhetoric and offers practical steps to using measurements as a tool for better decision making. Hubbard bridges the gaps to make college statistics relevant and valuable for business decisions."
-Ray Gilbert
EVP Lucent

"This book is remarkable in its range of measurement applications and its clarity of style. A must-read for every professional who has ever exclaimed, 'Sure, that concept is important, but can we measure it?'"
-Dr. Jack Stenner
Cofounder and CEO of MetaMetrics, Inc.

304 pages, Hardcover

First published in 2007


About the author

Douglas W. Hubbard

10 books · 73 followers

Ratings & Reviews


Community Reviews

5 stars: 1,148 (32%)
4 stars: 1,318 (37%)
3 stars: 751 (21%)
2 stars: 217 (6%)
1 star: 96 (2%)
Takuro Ishikawa
18 reviews · 10 followers
July 17, 2010
The most important thing I learned from this book: “A measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.” Finally! Someone has clearly explained that measurements are all approximations. Very often in social research, I have to spend a lot of time explaining that metrics don’t need to be exact to be useful and reliable. Hopefully, this book will help me shorten those conversations.
Yevgeniy Brikman
Author · 4 books · 657 followers
January 21, 2015
As an engineer, this book makes me happy. A great discussion of how to break *any* problem down into quantifiable metrics, how to figure out which of those metrics is valuable, and how to measure them. The book is fairly actionable, there is a complementary website with lots of handy Excel tools, and there are plenty of examples to help you along. The only downside is that this is largely a stats book in disguise, so some parts are fairly dry and the difficulty level jumps around a little bit. If you make important decisions, especially in business, this book is for you.

Some great quotes:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

Measurement: a quantitatively expressed reduction of uncertainty based on one or more observations.

So a measurement doesn’t have to eliminate uncertainty after all. A mere _reduction_ in uncertainty counts as a measurement and possibly can be worth much more than the cost of the measurement.

A problem well stated is a problem half solved.
—Charles Kettering (1876–1958)

The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as a tangible. First, we recognize that if X is something that we care about, then X, by definition, must be detectable in some way. How could we care about things like “quality,” “risk,” “security,” or “public image” if these things were totally undetectable, in any way, directly or indirectly? If we have reason to care about some unknown quantity, it is because we think it corresponds to desirable or undesirable results in some way. Second, if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. Once we accept that much, the final step is perhaps the easiest. If we can observe it in some amount, then it must be measurable.

Rule of five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

An important lesson comes from the origin of the word experiment. “Experiment” comes from the Latin ex-, meaning “of/from,” and periri, meaning “try/attempt.” It means, in other words, to get something by trying. The statistician David Moore, the 1998 president of the American Statistical Association, goes so far as to say: “If you don’t know what to measure, measure anyway. You’ll learn what to measure.”

Four useful measurement assumptions:
1. Your problem is not as unique as you think.
2. You have more data than you think.
3. You need less data than you think.
4. An adequate amount of new data is more accessible than you think.

Don’t assume that the only way to reduce your uncertainty is to use an impractically sophisticated method. Are you trying to get published in a peer-reviewed journal, or are you just trying to reduce your uncertainty about a real-life business decision? Think of measurement as iterative. Start measuring it. You can always adjust the method based on initial findings.

In business cases, most of the variables have an "information value" at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement is easily justified.

While there are certainly variables that do not justify measurement, a persistent misconception is that unless a measurement meets an arbitrary standard (e.g., adequate for publication in an academic journal or meets generally accepted accounting standards), it has no value. This is a slight oversimplification, but what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong. Whether it meets some other standard is irrelevant.

When people say “You can prove anything with statistics,” they probably don’t really mean “statistics,” they just mean broadly the use of numbers (especially, for some reason, percentages). And they really don’t mean “anything” or “prove.” What they really mean is that “numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.” With this, I completely agree but it is an entirely different claim.

The fact is that the preference for ignorance over even marginal reductions in ignorance is never the moral high ground. If decisions are made under a self-imposed state of higher uncertainty, policy makers (or even businesses like, say, airplane manufacturers) are betting on our lives with a higher chance of erroneous allocation of limited resources. In measurement, as in many other human endeavors, ignorance is not only wasteful but can be dangerous.

If we can’t identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value.

The lack of having an exact number is not the same as knowing nothing.

The McNamara Fallacy: The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide.

First, we know that the early part of any measurement usually is the high-value part. Don’t attempt a massive study to measure something if you have a lot of uncertainty about it now. Measure a little bit, remove some uncertainty, and evaluate what you have learned. Were you surprised? Is further measurement still necessary? Did what you learned in the beginning of the measurement give you some ideas about how to change the method? Iterative measurement gives you the most flexibility and the best bang for the buck.

This point might be disconcerting to some who would like more certainty in their world, but everything we know from “experience” is just a sample. We didn’t actually experience everything; we experienced some things and we extrapolated from there. That is all we get—fleeting glimpses of a mostly unobserved world from which we draw conclusions about all the stuff we didn’t see. Yet people seem to feel confident in the conclusions they draw from limited samples. The reason they feel this way is because experience tells them sampling often works. (Of course, that experience, too, is based on a sample.)

Anything you need to quantify can be measured in some way that is superior to not measuring it at all.
—Gilb’s Law
Jurgen Appelo
Author · 8 books · 911 followers
February 5, 2014
297 references to risk, and only 29 references to opportunity. No mention of unknown unknowns (or black swans), and no mention of the observer effect (Goodhart's law). A great book, teaching you all about metrics, as long as you ignore complexity.
556 reviews · 148 followers
October 9, 2013
An OK popularization of measurement techniques. But it downplays the key issue—which is data quality challenges, of which there are at least two types.

The first is the "moneyball" type: a phenomenon where we know intuitively that there are important differences in measurable outcomes but we lack statistically significant explanations. The challenge here is to find things to measure that are consistently revealing of the phenomenon you are ultimately interested in measuring (say team wins). Making it harder is that sometimes you need to build a supercollider in order to measure the phenomenon in question, and for many reasons that may not always be feasible. Data collection is expensive, in many ways, not least socially: new forms of measurement of social activities (including business activities) threaten those who benefit from status quo.

The second data quality challenge is more insidious, the "deviant globalization" type: we have the data, or some data, but it is hopelessly and often intentionally corrupted or compromised, since there are actors who have an active interest in obscuring measurement. This is true of almost all information related to morally questionable activities, for example, from sex to drugs to theft. But it's not just there: any sales manager trying to accurately gauge the size of his reps' pipeline is intimately familiar with the problem of trying to extract accurate data.

In sum, the book is fine on the technique side, but naive about what we may call the social epistemologies.
Martin Klubeck
Author · 16 books · 2 followers
July 10, 2013
I really like this book. Hubbard not only champions the belief that anything can be measured, he gives you the means (the understanding of how) to get it done. I have used his book on numerous occasions when tackling some difficult data collection efforts.

Hubbard's taxonomy and mine don't fully jibe, but that's a minor point; I found much more to like than not. I like to highlight and make notes in good books...this book is full of both. I especially like one of his "useful measurement assumptions." I think it sums up the book nicely: "There is a useful measurement that is much simpler than you think."

This book helps you find the simple answer to the daunting problem of "how to measure" something.

Another section I like a lot is how to "calibrate estimates" - basically it gives really useful, hands-on techniques for getting better at guessing. This is a great tool, not only for measuring, but for any role that requires good estimating.

Nothing is perfect, and Hubbard has at least one chapter where I think he failed to simplify life - his chapter on measuring risk was too complicated (unless you are a statistician).

Bottom line? Great book - especially for those tasked with collecting the data necessary to measure stuff!
Marcelo Bahia
86 reviews · 52 followers
September 4, 2018
An excellent read. It could be summed up as a "basic statistics for business" book, although it definitely goes beyond that in many aspects.

As the title suggests, throughout the whole book the author strongly defends the case that everything can be measured, even though the method may not be obvious at first glance. The book structure basically consists of the explanations of why this is so and various examples and methods that should help the reader to deal with many types of such problems.

Along the way, writing is very clear and reading is more pleasant than you would expect from a "statistics book". This is so because much of the value-added of the book comes not from the quantitative side (which is actually quite basic statistics, something that I see as positive in the context of the book), but from the qualitative analysis and differentiated viewpoint of the author under various circumstances. Actually, he seems knowledgeable and is pretty insightful most of the time, and I expect that the usefulness of each of these insights will depend on your current career and experience. Having worked as a financial analyst in the Brazilian financial markets for the past 8 years, for me the 2 most interesting insights were:

1) His definition of measurement as any number or figure that reduces risk compared to your previous state. I consider this REALLY important in the workplace, as most people consider valid measurements only those ones which can be precisely quantified, preferring ignorance over possible risk-reducing wide-range estimates in all other situations.

2) Due to the above misconception of the definition of measurement, people neglect measurements and estimates exactly in the situations in which they are more useful. When you don't know anything, any imprecise estimate will reduce risk and add value! Looking back, this non-obvious insight is precisely what we needed when facing some specific analytical and decision-making problems in my firm.

Overall, this is one of the most interesting books I've read in the past few months, and it should be a great investment of time & money to any professional that mildly deals with quantitative problems at work.
Chip Huyen
Author · 6 books · 3,424 followers
August 22, 2023
A well-written book. I find the first 1/3 and the last 1/3 helpful.
Bibhu Ashish
131 reviews · 8 followers
November 17, 2014
I happened to read the book on the IIBA.org site, where I have been a member since last year. The best takeaway from the book is the structured thought process it brings to dealing with intangibles, which we are usually demotivated to measure. To summarize my learning, I will just mention the points below, which I have copied from the book.
1-If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow.
2-If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
3-You can quantify your current uncertainty with calibrated estimates.
4-You can compute the value of additional information by knowing the "threshold" of the measurement where it begins to make a difference compared to your existing uncertainty.
5-Once you know what it's worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
6-Knowing just a few methods for random sampling, controlled experiments, or even merely improving on the judgments of experts can lead to a significant reduction in uncertainty.

One caution though. People who are not that fond of mathematics and data may find it a bit too much, but this book is worth reading at least once.
Alok Kejriwal
Author · 4 books · 590 followers
May 3, 2020
How to Measure Anything - Book Review.

A mentally challenging yet incredibly enlightening book.

What’s impressive about the content?

- The Art and Science of making guesses.

- The ability to use well thought through assumptions and estimate outcomes.

- Early examples in the book of legends such as Fermi, who asked his students to estimate the number of piano tuners in Chicago (more like the questions you supposedly get asked in a Google interview?)

- Bayes Theorem and Bayesian thinking. It's NERDY but essential.

- Profound amazing examples of how you DON'T have to have too much data to analyse things.

- How to INVENT metrics. How the Cleveland Orchestra started counting 'standing ovations' to measure the success of its new conductor.

- The importance of the Confidence Interval (CI).

- MONTE CARLO simulations!

- How Amazon introduced free wrapping to figure out how many books were gifts!

- Q's like: How would you measure the number of fishes in a lake?

This is a MATH-heavy book that takes a LONG time to read. If you don't like numbers & formulas (the book is FULL of them), I suggest you still buy the book and take from it what you want.
iamKovy
340 reviews · 16 followers
March 20, 2023
This is a good book for which I am, frankly, a bit too dim to make full use of the knowledge packed into it.

I meekly expected a couple dozen tricks for rough estimation, shortcuts along the lines of "subtract 30, divide by two, and turn Fahrenheit into Celsius," and so on. And there is some of that in "How to Measure Anything"... until Hubbard draws a pentagram out of statistical terminology and starts summoning Satan in the form of Excel 2003 formulas. Right in the text of the book, yes.

Even without the formulas, though, the book gives a useful theoretical base and says plainly: don't freak out, you really can measure anything. The main thing is to understand what you are measuring, how to do it efficiently rather than elaborately, and then calibrate the result. Most importantly, don't be too hard on yourself: measurement is, first of all, not an attempt to hack the Matrix's source code but simply a reduction of uncertainty, as far as the available data and your toolkit allow. And at least a little reduction is always possible.

In sum: difficult but interesting, and even without the charts it sets the right mindset and approach to the problem. In places you can feel the book was written in an era of different computing tools and different access to them, but the methodology as a foundation doesn't age. So no harm done.

The main thing while reading: don't be like those people in the Goodreads reviews who complain that they listened to it in audio format and it was hard. It isn't all that easy with your eyes either, and how you are supposed to "listen" to the charts is a complete mystery to me.
JJ Khodadadi
435 reviews · 108 followers
January 28, 2021
How do we measure anything!?
For business topics, and scientific ones as well, it is a good subject, and it helps us measure change so that, with the help of measurement, we can get accurate information about whether our work is progressing or regressing.
Steve Walker
230 reviews · 10 followers
February 23, 2013
There is a lot of good information here but it is more of a textbook and very dry. I read this book because I have to make decisions every day. Some decisions are very easy because I have the intel and facts that make the decision for me. But other decisions aren't so easy. What are my "real" risks? How do I separate emotion from a decision? What about all the things involved that can't be measured?

Ah, that is where this book was insightful and helpful. Hubbard asserts that there isn't anything that can't be measured. Metrics. That is the key to making better decisions. The group I manage has a lot of dynamic and organic tasks to perform each day. I have never been able to quantify a lot of the work we do. That is because I am entrenched in scientific measurements such as average time to handle a customer call. That measurement is meaningless for me. Each call is a different subject. I cannot measure their performance based on how quickly they resolve a call because some problems are simple and others are complex and require enlisting other personnel.

But Hubbard teaches many techniques and alternate ways to look at things to get some way of quantifying; perhaps not precisely, but enough to help navigate the myriad pieces of information that can go into a business decision.

You have to "want" to read this book. But if you "want" to improve ROI; if you "want" to provide better risk analysis; "if you "want" to be more confident about providing management with your recommendations ... then you'll "want" to read this book.
Robert
283 reviews
December 29, 2020
While analysing digital entertainment stocks over summer, I got stuck on the question of how to value a company's intellectual property. This initially seems like an incredibly difficult task – how can one quantitatively estimate the value of something so intangible? I was able to make progress by considering the following: if you own IP, the next time you want to produce a movie, you get to keep all of the profits rather than having to pay a portion to a licensor; this difference can be thought of as the cash flow profile of the IP.

How to Measure Anything is a definitive resource on questions of this nature. The key thesis of HTMA, as suggested in its title, is that any quantity of practical interest can be measured, where "measurement" means a reduction in uncertainty. Hubbard provides a general framework for approaching measurement tasks, including specific techniques and worked examples.

HTMA is a difficult book to review; it has too many case studies and anecdotes for it to be a textbook but goes into much more detail than a typical nonfiction book. For example, rather than simply stating that Bayes' theorem is the appropriate framework for thinking about many measurement problems, Hubbard actually provides step-by-step walkthroughs of the calculations and discusses how to implement them in Excel.
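As a rough illustration of the kind of update those walkthroughs cover (a minimal sketch with made-up numbers, not one of the book's worked examples), here is a single Bayesian update in Python:

# Minimal Bayesian update sketch (illustrative numbers, not Hubbard's worked example).
# We want "What is the chance the truth is X, given what I've seen?" and get it
# by answering the easier inverse question: "If the truth were X, what was the
# chance of seeing what I did?"

prior_x = 0.30            # hypothetical prior belief that hypothesis X is true
p_obs_given_x = 0.80      # chance of the observation if X is true
p_obs_given_not_x = 0.20  # chance of the observation if X is false

# Total probability of the observation under either hypothesis.
p_obs = p_obs_given_x * prior_x + p_obs_given_not_x * (1 - prior_x)

# Bayes' theorem: P(X | obs) = P(obs | X) * P(X) / P(obs)
posterior_x = p_obs_given_x * prior_x / p_obs

print(f"Prior P(X) = {prior_x:.2f}, posterior P(X | obs) = {posterior_x:.2f}")
# With these numbers the posterior is about 0.63: one observation reduced
# uncertainty without eliminating it.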

I think HTMA is an excellent book for the right audience – I would broadly characterise this audience as practitioners/students of "management science", e.g. managers who are facing difficult business questions that look unquantifiable, or mathematicians/physicists who want to learn about business.

For a general audience interested in rationality and decision making, I would suggest starting with Superforecasting (many of the concepts are similar). HTMA still has plenty of value, but it is easy to get bogged down by the walkthroughs of the actual calculations (I would have preferred it relegated to the appendix, but I'm not the target audience).

The short case studies at the end might be an interesting place to start: if you are as impressed with them as I was, you can dig into the rest of the book for the nuts and bolts. One of them involves the valuation of industry standards; it's cool to see how something so intangible can be quantitatively measured.

Some key points:

- Everything of practical importance is measurable, else it could not be practical by definition.
- Uncertainty-reduction has diminishing returns. When you are very uncertain about something, even a tiny amount of data can massively reduce uncertainty, but to get high levels of precision, you need a lot of data.
- Correlation does not imply causation, but it does provide evidence for causation (via Bayes theorem).
- It is worth thinking about the meta-question: determining the value of knowing an answer to the question, rather than just valuing the answer to the question. This can tell you how much time/money you should allocate to finding an answer.
- Applied Information Economics is a rational framework for approaching business cases: focus on the areas that are most uncertain and consider the value of measurements before designing a quantitative strategy for uncertainty-reduction.
Ang Li-Lian
Author · 1 book · 35 followers
October 28, 2023
I find this book empowering: it gets you to think about making data-driven decisions in creative ways. What are you really trying to figure out? Why do you want to know? I appreciated the book for resetting my brain to look at all decisions as quantifiable, even if the quantification isn't perfect!

Measurement is about reducing uncertainty, rather than finding an accurate answer!!

The specific methods are good for getting an overview of what you could do, but I think the most value was in the first sections, which talk about figuring out the right problem.
Shayan
22 reviews · 1 follower
September 29, 2021
An excellent book with a mediocre translation.
The book presents outstanding concepts about the principle of measurement,
and it offers many practical solutions for simplifying problems and analyzing them quantitatively. If you know your way around Excel, the book becomes several times more appealing.
It also has a method for calibrating your mind which, if its exercises are done properly, is extremely useful in everyday life.
Overall, I think you need to read this book in order to measure any problem in your business.
Mahdi Majidzadeh
31 reviews · 10 followers
August 6, 2021
Unfortunately, the book was not consistent.
It covered material at a general level and also at a very specialized level, down to Excel formulas.
In some places the density of the content dropped, and in others it got extremely high.
That made the book very bulky and hard to connect with.
La La Lena
142 reviews · 2 followers
August 2, 2023
I would recommend this book to all CEOs, and really to anyone who wants to grasp the fact that whatever can be observed can also be measured.

The only thing that cannot be measured is Ukrainians' fury at the Russian invaders. No instruments will help there.
Damn it, if Meta bans people for Russophobia, can I express my Russophobia in every Goodreads review? 🤌🤞🌚
Nathan
158 reviews · 7 followers
November 23, 2019
This is a dense book. It took me several months to get through it, but that was partially because after the refresher on Bayesian Statistics I started reading another textbook on that.

If you like math and numbers and analysis and have to make decisions, you'll get some useful information from this book. I built my first Monte Carlo model while walking through this.

For years I've been asking friends "How confident are you?" when they give me a binary answer. E.g.: Q: Will this be done by Friday? A: Yes. Q: How confident are you? A: 50%.

After reading this I've taken away the idea of always asking people for a 90% confidence interval. I think one of the most useful (and fun) parts of this book is the calibration exercise. If you're asked 10 questions and told to provide a 90% confidence interval for where the true answer lies, then you should get 9 out of the 10 correct. I didn't on my first try, and most people are terrible at it. But add money to the mix, and people instantly improve. This tip was immediately used in the next model I built :)
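Scoring that exercise is simple enough to sketch in a few lines; the questions and intervals below are hypothetical, not taken from the book's actual calibration test:

# Scoring a calibration exercise: each entry is (lower bound, upper bound, true value).
# A well-calibrated estimator's 90% confidence intervals should contain the truth
# about 9 times out of 10. All intervals and answers below are made-up examples.
answers = [
    (1890, 1910, 1903),   # e.g. year of the Wright brothers' first powered flight
    (150, 250, 212),      # e.g. boiling point of water in degrees Fahrenheit
    (5, 50, 37),          # e.g. normal human body temperature in degrees Celsius
    (1000, 5000, 6371),   # e.g. Earth's radius in km -- a miss, truth is outside the range
    (10, 90, 42),
]

hits = sum(1 for lo, hi, truth in answers if lo <= truth <= hi)
print(f"{hits}/{len(answers)} intervals contained the true value "
      f"({hits / len(answers):.0%}); a calibrated estimator targets about 90%.")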

Here are my notes:
Although this may seem a paradox, all exact science is based on the idea of approximation. If a man tells you he knows a thing exactly, then you can be safe in inferring that you are speaking to an inexact man. —Bertrand Russell (1872–1970), British mathematician and philosopher

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.

A mere reduction, not necessarily elimination, of uncertainty will suffice for a measurement.

Not only does a true measurement not need to be infinitely precise to be considered a measurement, but the lack of reported error—implying the number is exact—can be an indication that empirical methods, such as sampling and experiments, were not used (i.e., it’s not really a measurement at all).

the key lesson is that measurements are more than you knew before about something that matters.

A problem well stated is a problem half solved. —Charles Kettering (1876–1958), American inventor, holder of 300 patents, including electrical ignition for automobiles

There is no greater impediment to the advancement of knowledge than the ambiguity of words. —Thomas Reid (1710–1796), Scottish philosopher

If someone asks how to measure “strategic alignment” or “flexibility” or “customer satisfaction,” I simply ask: “What do you mean, exactly?” It is interesting how often people further refine their use of the term in a way that almost answers the measurement question by itself.

Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

The only valid reason to say that a measurement shouldn’t be made is that the cost of the measurement exceeds its benefits.

Usually, Only a Few Things Matter—But They Usually Matter a Lot

In most business cases, most of the variables have an “information value” at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement effort is easily justified.

what makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong.

Ignorance is never better than knowledge. —Enrico Fermi, winner of the 1938 Nobel Prize for Physics

Four Useful Measurement Assumptions: It’s been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think.

the first few observations are usually the highest payback in uncertainty reduction for a given amount of effort. In fact, it is a common misconception that the higher your uncertainty, the more data you need to significantly reduce it. Again, when you know next to nothing, you don’t need much additional data to tell you something you didn’t know before.

A decision has two or more realistic alternatives.

merely decomposing highly uncertain estimates provides a huge improvement to estimates.

As the great statistician George Box put it, “Essentially, all models are wrong, but some are useful.”

the subjective estimates of some persons are demonstrably—measurably—better than those of others.

the ability of a person to assess odds can be calibrated—just like any scientific instrument is calibrated to ensure it gives proper readings.

assessing uncertainty is a general skill that can be taught with a measurable improvement.

we are simply not wired to doubt our own proclamations once we make them.

I also asked experts who are providing range estimates to look at each bound on the range as a separate “binary” question. A 90% CI means there is a 5% chance the true value could be greater than the upper bound and a 5% chance it could be less than the lower bound. This means that estimators must be 95% sure that the true value is less than the upper bound. If they are not that certain, they should increase the upper bound until they are 95% certain.

I sometimes call this the “absurdity test.” It reframes the question from “What do I think this value could be?” to “What values do I know to be ridiculous?” We look for answers that are obviously absurd and then eliminate them until we get to answers that are still unlikely but not entirely implausible. This is the edge of our knowledge about that quantity.

Assumptions about quantities are necessary if you have to use deterministic accounting methods with exact points as values. You could never know an exact point with certainty so any such value must be an assumption. But if you are allowed to model your uncertainty with ranges and probabilities, you do not have to state something you don’t know for a fact. If you are uncertain, your ranges and assigned probabilities should reflect that. If you have “no idea” that a narrow range is correct, you simply widen it until it reflects what you do know—with 90% confidence.

When it comes to assessing your own uncertainty, you are the world’s leading expert.

Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty.

It is better to be approximately right than to be precisely wrong. —Warren Buffett

It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible. —Aristotle

For most problems in statistics and measurement, we are asking, “What is the chance the truth is X, given what I’ve seen?” Again, it’s actually often easier to answer the question, “If the truth was X, what was the chance of seeing what I did?” Bayesian inversion allows us to answer the first question by answering the second, easier question.

When we examine our own behaviors closely, it’s easy to see that only a hypocrite says “Life is priceless.”

any fair researcher should always be able to say that sufficient empirical evidence would change their mind.

If it’s really that important, it’s something you can define. If it’s something you think exists at all, it’s something you’ve already observed somehow.

If it’s something important and something uncertain, you have a cost of being wrong and a chance of being wrong.




Kirill
75 reviews · 13 followers
October 6, 2021
This book is simple and complex at the same time.
Most importantly, it outlines the measurement framework: the approach of collecting only the data that will be used to make a business decision. It is basically very similar to what was summarised in one of the chapters of Software Engineering at Google: Lessons Learned from Programming Over Time. The idea is so crystal clear and straightforward: define the decision to be made and collect the data that would help to make that decision. And still, my everyday reality is to see data collected just because it "sounds reasonable" while decisions are made because "it feels right", without looking at the value of information. How can such simple things become so opaque and hard in real life!

Hubbard goes into a lot of detail to explain the measurement approach. Carefully going through the book helped me a lot in developing my own arguments for helping the people I work with make good data-driven decisions.

Statistical methods make up a solid part of the book. Not everything is explained in an easy way, and if you do not apply statistics on a daily basis there is a high probability you will forget the math after a few weeks. It does make sense to keep an overview of the statistical instruments in mind and look up the details somewhere (not necessarily in this book) if you decide to apply them.
Jon
74 reviews · 4 followers
September 30, 2013
Simply put, the first half of this is just awesome. As I listened to this via audio, the second half is plagued by many formulas that don't translate well or aren't easily understood when listened to. The second half is also very heavy on statistics, which could be a somewhat laborious read for some.

The first half is highly recommended as it goes into what it means to "measure" something and suggests some very fundamental questions regarding measuring. E.g.:

What is it you want to have measured? E.g. what does security mean for you?
Why is this important for you?
How much is this measurement worth to you?
What do you know about the problem now?

Hubbard gives tools for solving problems, e.g. the Fermi and Bayesian toolboxes, that allow a rough estimation of practically anything. Hubbard also gives some very good pointers as to how you can calibrate yourself to counteract psychological biases.
If you read it, make sure you dedicate a good amount of time to the first half since, IMO, this is where most of the loot is located.
Karen
627 reviews · 1 follower
February 3, 2021
This book is a *must read* for anyone who needs to compile coherent, useful business cases in scenarios where "intangibles" exist. As the author notes, it's not about compiling exact data but rather reducing uncertainty (and thus risk). I found the concepts and approaches outlined by this book to be very useful. While not a whispersync title, I bought both the kindle version and the audible.com version -- while it's nice to have the concepts available for visual review, I found the audible.com version to be more compelling and interesting.
Highly recommended.
Rick Howard
Author · 3 books · 31 followers
July 10, 2017
Douglas Hubbard’s "How to Measure Anything: Finding the Value of "Intangibles" is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian and it has been out of favor within the statistical community until just recently when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions about risk decisions for our organizations. He even demonstrates how easy it is for network defenders to run our own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, "How to Measure Anything: Finding the Value of "Intangibles" is indeed a Cybersecurity Canon Hall of Fame candidate and you should have read it by now.

Introduction

The Cybersecurity Canon project is a “curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional’s education that will make the practitioner incomplete.” [1]

This year, the Canon review committee inducted this book into the Canon Hall of Fame: “How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen. [2] [3]

According to the Canon committee member reviewer, Steve Winterfeld, "How to Measure Anything in Cybersecurity Risk” is an extension of Hubbard’s successful first book, “How to Measure Anything: Finding the Value of “Intangibles” in Business. It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in qualitative analytics with practical application guidance." [4]

I personally believe that precision risk assessment is a key and currently missing element in the CISO's bag of tricks. As a community, network defenders in general are not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them red, yellow, or green labels to mean bad, kind-of-bad, or not bad. If any of my bosses had bothered to ask me why I gave one weakness a red label vs. a green label, I would have said something like: "25 years of experience, Blah, Blah, Blah, Trust Me, Blah, Blah, Blah, can I have the money please?"

I believe the network defender’s inability to translate technical risk into business risk with any precision is the reason that the CISO is not considered at the same level as other senior C-Suite executives like the CEO, the CFO, the CTO and the CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand.

This CISO inability is the reason that the Canon Committee inducted "How to Measure Anything in Cybersecurity Risk,” and another precision risk book called “Measuring and Managing Information Risk: A FAIR Approach” into the Canon Hall of Fame. [5][4][3][6] [7]. These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business.

For me though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stat courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I only understood about three-quarters of what I was reading, not because they were written badly but because I struggled with the material. I decided to get back to the basics and read Hubbard's original book that Winterfeld referenced in his review, "How to Measure Anything: Finding the Value of 'Intangibles' in Business," to see if it was also Canon worthy.


The Network Defender’s misunderstanding of Metrics, Risk Reduction and Probabilities


Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. He reasons from scholars like Edward Lee Thorndike and Paul Meehl from the early twentieth-century about Clarification Chains:

If it matters at all, it is detectable/observable.
If it is detectable, it can be detected as an amount (or range of possible amounts).
If it can be detected as a range of possible amounts, it can be measured. [8]

As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with any precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can’t know something with 100% accuracy or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs.

Hubbard makes the point that we are not looking for 100% accuracy. What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something. He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences like art or free time or life in general.

According to Hubbard, “We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities.” [8] These probabilities refer to our uncertainty state about a specific question. The math trick that we all need to understand is allowing for ranges of possibilities that we are 90% sure the true value lies between.

For example, we may be trying to reduce the number of humans that have to respond to a cyberattack. In this fictitious example, last year the Incident Response Team handled 100 incidents with three people each; a total of 300 people. We think that installing a next generation firewall will reduce that number. We don’t know exactly how many but some. We start here to bracket the question.

Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people, for a total of nine? Maybe. What about reducing the number to 10 incidents with three people, for a total of 30? That might be possible. That is our lower limit.

Let’s go to the high side. Do you think that installing the firewall will have zero impact in reducing the number? No. What about 90 attacks with three people for a total of 270? Maybe. What about 85 attacks with three people for a total of 255? That seems reasonable. That is our upper limit.

By doing this bracketing we can say that we are 90% sure that installing the next generation firewall will reduce the number of humans that have to respond to cyber incidents from 300 to somewhere between 30 and 255. Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that, first, you now know this much where before you didn't know anything. Second, this is just the start. You can now collect other metrics that might help you reduce the gap.
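One common way to work with such a bracketed estimate is to treat the 90% interval as defining a rough probability distribution; the normal-shape assumption in this short sketch is mine, added only for illustration, and the 30/255 bounds come from the firewall example above:

# Sketch: turning the bracketed 90% CI (30 to 255 responders) into a rough
# probability distribution by assuming a normal shape (illustrative only).
lower, upper = 30, 255

mean = (lower + upper) / 2        # midpoint of the 90% interval
sigma = (upper - lower) / 3.29    # 90% of a normal lies within +/-1.645 standard deviations

print(f"Implied mean ~ {mean:.0f} responders, standard deviation ~ {sigma:.0f}")
# With these two numbers you can start asking follow-up questions, such as
# "what is the chance we would still need more than 200 responders?"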

The History of Scientific Measurement Evolution

This particular view of probabilities, the idea that there is a range of outcomes that you can be 90% sure about, is the Bayesian interpretation of probabilities. Interestingly, this different view of statistics has not been in favor since its inception when Thomas Bayes penned the original formula back in the 1740s. The naysayers originated from the Frequentists. Their theory said that the probability of an event can only be determined by how many times it has happened in the past. To them, modern science requires both objectivity and precise answers. According to Hubbard,

“The term ‘statistics’ was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749. He derived the word from the Latin statisticum, meaning ‘pertaining to the state.’ Statistics was literally the quantitative study of the state.” [8]


In the Frequentist view, the Bayesian philosophy requires a measure of "belief and approximations. It is subjectivity run amok, ignorance coined into science." [7] But the real world has problems where the data is scant. Leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like the defeat of the Enigma encryption machine in World War II and the finding of a lost, sunken nuclear submarine that was the basis for the movie "The Hunt for Red October." But it wasn't until the early 1990s that the theory became commonly accepted. [7]

Hubbard walks the reader through this historical research about the current state of scientific measurement. He explains how Paul Meehl in the mid-1900s demonstrated time and again that statistical models outperformed human experts. He describes the birth of Information Theory with Claude Shannon in the late 1940s and credits Stanley Smith Stevens around the same time with crystallizing different scales of measurement, from sets, to ordinals, to intervals and ratios. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements around subjective probabilities.

In the end, Hubbard defines measurement as this

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]



Simple Math Tricks

Hubbard explains two math tricks that, after reading, seem impossible to be true but, when used by Bayesian proponents, greatly simplify measurement-taking for difficult problems.

The Power of Small Samples: The Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]

The Single Sample Majority Rule (i.e., The Urn of Mystery Rule):
Given maximum uncertainty about a population proportion—such that you believe the proportion could be anything between 0% and 100% with all values being equally likely—there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]
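Both rules are easy to check for yourself. The Rule of Five follows from a short argument: each random draw has a 50% chance of landing above the population median, so the chance that all five land above it is (1/2)^5 = 1/32, likewise for all five below, leaving 1 - 2/32 = 93.75% that the median falls between the sample's smallest and largest values. Here is a minimal simulation sketch of both rules (my own check, not code from the book):

# Simulation check of the two "math tricks" (my own sketch, not from the book).
import random

TRIALS = 100_000
population = list(range(100_000))            # arbitrary population with a well-defined median
median = population[len(population) // 2]

# Rule of Five: how often does the population median fall between the
# min and max of a random sample of five?
rule_of_five_hits = 0
for _ in range(TRIALS):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        rule_of_five_hits += 1
print(f"Rule of Five: {rule_of_five_hits / TRIALS:.3f} (theory: 0.9375)")

# Urn of Mystery: pick the majority proportion uniformly between 0 and 1,
# draw one marble; how often does that single marble match the majority color?
urn_hits = 0
for _ in range(TRIALS):
    p_green = random.random()                # unknown proportion of green marbles
    marble_is_green = random.random() < p_green
    majority_is_green = p_green > 0.5
    if marble_is_green == majority_is_green:
        urn_hits += 1
print(f"Single Sample Majority Rule: {urn_hits / TRIALS:.3f} (theory: 0.75)")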

I admit that the math behind these rules escapes me. But I don’t have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies: “Lincoln.” President Lincoln, played brilliantly by Daniel Day-Lewis, discusses his reasoning for keeping the southern agents, who want to discuss peace before the 13th Amendment is passed, away from Washington.

"Euclid's first common notion is this. Things that are equal to the same thing are equal to each other. That's a rule of mathematical reasoning. It's true because it works. Has done and always will do.” [9]

The bottom line is that statistically significant does not mean a large number of samples. Hubbard says that statistical significance has a precise mathematical meaning that most lay people do not understand and many scientists get wrong most of the time. For the purposes of risk reduction, stick to the idea of a 90% confidence interval regarding potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.


Simple Measurement Best Practices and Definitions

As I said before, most network defenders think that measuring risk in terms of cyber security is too hard. Hubbard explains four rules of thumb that every practitioner should consider before they give up:

It’s been measured before.
You have far more data than you think.
You need far less data than you think.
Useful, new observations are more accessible than you think. [8]

He then defines “uncertainty” and “risk” through a possibility and probabilistic lens:


Uncertainty:
The lack of complete certainty, that is, the existence of more than one possibility.

Measurement of Uncertainty:
A set of probabilities assigned to a set of possibilities.

Risk:
A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.

Measurement of Risk:
A set of possibilities each with quantified probabilities and quantified losses. [8]


In the network defender world, we tend to define risk in terms of threats and vulnerabilities and consequences. [10] Hubbard’s relatively new take gives us a much more precise way to think about these terms.



Monte Carlo Simulations

According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations. In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everybody with access to a spreadsheet can run their own Monte Carlo simulations.

For example, take my previous example of trying to reduce the number of humans that have to respond to a cyberattack. We said that during the previous year, 300 people responded to cyberattacks. We said that we were 90% certain that the installation of a next generation firewall would reduce the number of humans that have to respond to incidents to between 30 and 255.

We can refine that number even more by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios where I randomly picked a number between 0 and 300. I calculated the mean to be 131 and the standard deviation to be 64. Remember that the standard deviation is nothing more than a measure of spread from the mean. [11][12][13] The rule of 68–95–99.7 says that 68% of the recorded values will fall within the first standard deviation, 95% will fall within the second standard deviation, and 99.7% will fall within the third standard deviation. [8] With our original estimate, we said there was a 90% chance that the number is between 30 and 255. After running the Monte Carlo simulation, we can say that there is a 68% chance that the number is between 76 and 248.

How about that? Even a statistical luddite can run his own Monte Carlo simulation.
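For anyone without a spreadsheet handy, the same back-of-the-envelope experiment takes only a few lines of Python. It simply mirrors the procedure described above (100 scenarios, each a random draw between 0 and 300); the exact mean and spread will differ from run to run, so don't expect to reproduce the 131 and 64 above exactly:

# Re-running the spreadsheet experiment described above in Python:
# 100 scenarios, each a random number of responders between 0 and 300.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible
scenarios = [random.uniform(0, 300) for _ in range(100)]

mean = statistics.mean(scenarios)
sd = statistics.stdev(scenarios)

# By the 68-95-99.7 rule, about 68% of a normal distribution's values fall
# within one standard deviation of the mean.
print(f"mean = {mean:.0f}, standard deviation = {sd:.0f}")
print(f"~68% of simulated outcomes fall between {mean - sd:.0f} and {mean + sd:.0f}")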

Conclusion

After reading Hubbard's second book in the series, "How to Measure Anything in Cybersecurity Risk," I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked and to determine if the original was Canon worthy too. I learned that there was probably a way to collect data to support risk decisions for even the hardest kinds of questions. I learned that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is indeed Canon worthy and you should have read it by now.


Sources

[1] "Cybersecurity Canon: Essential Reading for the Security Professional," by Palo Alto Networks, Last Viewed 5 July 2017,
https://www.paloaltonetworks.com/thre...

[2] "Cybersecurity Canon: 2017 Award Winners," by Palo Alto Networks, Last Visited 5 July 2017,
https://cybercanon.paloaltonetworks.c...

[3] " 'How To Measure Anything in Cybersecurity Risk' - Cybersecurity Canon 2017," Video Interview by Palo Alto Networks, Interviewer: Canon Committee Member, Bob Clark, Interviewees Douglas W. Hubbard and Richard Seiersen, 7 June 2017, Last Visited 5 July 2017,
https://www.youtube.com/watch?v=2o_mA...

[4] "The Cybersecurity Canon: How to Measure Anything in Cybersecurity Risk," Book review by Canon Committee Member, Steve Winterfeld, 2 December 2016, Last Visited 5 July 2017,
https://cybercanon.paloaltonetworks.com/

[5] "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, Published by Wiley, April 25th 2016, Last Visited 5 July 2017,
https://www.goodreads.com/book/show/2...

[6] "The Cybersecurity Canon: Measuring and Managing Information Risk: A FAIR Approach," Book review by Canon Committee Member, Ben Rothke, 10 September 2015, Last Visited 5 July 2017,
https://researchcenter.paloaltonetwor...

[7] "Sharon Bertsch McGrayne: 'The Theory That Would Not Die' | Talks at Google," by Sharon Bertsch McGrayne, Google, 23 August 2011, Last Visited 7 July 2017,
https://www.youtube.com/watch?v=8oD6e...

[8] "How to Measure Anything: Finding the Value of "Intangibles" in Business," by Douglas W. Hubbard, Published by John Wiley & Sons, 1985, Last Visited 10 July 2017,
https://www.goodreads.com/book/show/4...

[9] "Lincoln talks about Euclid," by Alexandre Borovik, The De Morgan Forum, 20 December 2012, Last Visited 10 July 2017,
http://education.lms.ac.uk/2012/12/li...

[10] "BitSight Security Ratings Blog," by Melissa Stevens, 10 January 2017, Last Visited 10 July 2017,
https://www.bitsighttech.com/blog/cyb...

[11] "Standard Deviation - Explained and Visualized," by Jeremy Jones, YouTube, 5 April 2015, Last Visited 9 July 2017,
https://www.youtube.c
Vlad Ardelean
147 reviews · 29 followers
July 2, 2019
Oh boy, I've been waiting a long time to review this one. I'll start with the good parts, as they're few and far between. I've also posted this review directly from the kindle app twice already, and it doesn't show up, so this is my 3rd attempt to post a review for this book:


The good parts:
I learnt how to measure the population of fish in a lake. That's quite cool! I will not give a spoiler here; suffice it to say that it involves catching and tagging the fish.
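
For readers who don't mind a small spoiler, here is a minimal Python sketch of the standard mark-recapture (Lincoln-Petersen) estimate that the catch-and-tag approach usually refers to; all of the counts below are made up for illustration:

# Lincoln-Petersen mark-recapture estimate (illustrative numbers only).
# Tag a first catch, release it, then see what fraction of a second
# catch is already tagged, and scale up to estimate the population.
tagged_first_catch = 100       # fish caught, tagged, and released
second_catch = 80              # fish caught on a later day
tagged_in_second_catch = 16    # of those, how many carry a tag

# If tags mix evenly: tagged_in_second_catch / second_catch
# is roughly tagged_first_catch / population.
estimated_population = tagged_first_catch * second_catch / tagged_in_second_catch
print(f"Estimated fish in the lake: {estimated_population:.0f}")   # ~500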

Then I learnt a few statistics factlets. For instance, in a normal distribution, 90% of the measurements will fall within ±1.645 standard deviations of the mean (an interval 3.29 sigmas wide). I also learned that if I ask just 5 random people how long it takes them to get to work, there is a ~93.75% chance that the population median lies between the minimum and maximum of those 5 values...regardless of the size of the population. These are just statistical truths, no debate there.
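
Both factlets are easy to check with a quick simulation; this is a hedged sketch (not code from the book), using only the Python standard library:

import random
import statistics

random.seed(0)

# Factlet 1: ~90% of normal draws fall within +/-1.645 standard deviations.
draws = [random.gauss(0, 1) for _ in range(100_000)]
inside = sum(abs(x) <= 1.645 for x in draws) / len(draws)
print(f"Share within +/-1.645 sigma: {inside:.3f}")   # ~0.900

# Factlet 2 (the "Rule of Five"): the population median lies between the
# min and max of 5 random samples with probability 1 - 2 * 0.5**5 = 93.75%.
population = [random.lognormvariate(3, 1) for _ in range(10_000)]
true_median = statistics.median(population)
trials = 20_000
hits = sum(
    min(sample) <= true_median <= max(sample)
    for sample in (random.sample(population, 5) for _ in range(trials))
)
print(f"Median captured by min..max of 5: {hits / trials:.3f}")   # ~0.94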

I also learnt about Emily Rosa, who debunked the claims of "touch healing" therapists about being able to detect auras... spoiler: they couldn't do it, or at least couldn't show they were better than tossing a coin. I learned about how Enrico Fermi was really good at estimation problems using just his available knowledge. I learnt about Eratosthenes, who estimated the circumference of the Earth with quite high accuracy! It was fun.

Other nice things in the book were mentions of the Rasch and Lens decision models, and of Monte Carlo simulations for assisting in decisions. Then Daniel Kahneman (and some other people) are mentioned for contributions to psychology showing consistent flaws in human thinking (we're very bad at estimating extremely rare events).
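
To give a flavor of the Monte Carlo idea mentioned above, here is a minimal sketch (my own toy numbers, not an example from the book): express each uncertain input as a 90% range, simulate thousands of scenarios, and read off the chance that a project loses money:

import random

random.seed(1)

def simulate_project():
    # 90% ranges modeled as normal distributions: the mean is the midpoint
    # and the range spans about 3.29 standard deviations.
    annual_savings = random.gauss(150_000, (250_000 - 50_000) / 3.29)
    implementation_cost = random.gauss(200_000, (300_000 - 100_000) / 3.29)
    return annual_savings * 3 - implementation_cost   # 3-year horizon

outcomes = [simulate_project() for _ in range(50_000)]
p_loss = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"Mean net benefit: {sum(outcomes) / len(outcomes):,.0f}")
print(f"Chance of losing money: {p_loss:.1%}")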

There's some talk about Bayesian statistics compared to the "frequentist" interpretation.
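
A single application of Bayes' rule shows what that debate is about in practice; this is a hedged sketch with invented rates (the prior, hit rate, and false-alarm rate below are purely illustrative):

# Minimal Bayes update: how one observation shifts a prior probability.
p_defect = 0.01            # prior: 1% of parts are defective
p_flag_if_defect = 0.90    # the inspection flags 90% of defective parts
p_flag_if_good = 0.05      # ...and 5% of good parts (false alarms)

p_flag = p_flag_if_defect * p_defect + p_flag_if_good * (1 - p_defect)
posterior = p_flag_if_defect * p_defect / p_flag
print(f"P(defective | flagged) = {posterior:.1%}")   # ~15.4%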

Another thing that surprised me was that the author talks at length about these magical people called "calibrated estimation experts". Apparently (and there is literature offering evidence for this), you can train yourself to give an answer AND the probability of that answer being right. For instance, I don't know when Napoleon was born, but I can say with 90% certainty that it was between 1750 and 1850. Apparently, you can train yourself to become very good at providing that probability.

The author then provides a few tricks on how to better give a probability for "guessing" answers.
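
Since calibration just means that your "90% sure" ranges should capture the true answer about 90% of the time, scoring yourself takes only a few lines; here is a sketch with a tiny hypothetical quiz (the intervals are examples, the true values are well-known facts):

# Each entry: (low guess, high guess, true value).
# A calibrated estimator's 90% intervals contain the truth ~90% of the time.
quiz = [
    (1750, 1850, 1769),            # Napoleon's birth year
    (1890, 1910, 1903),            # year of the Wright brothers' first flight
    (300_000, 400_000, 384_400),   # average Earth-Moon distance in km
    (5, 15, 11),                   # players per side in soccer
    (30, 45, 37),                  # normal human body temperature in Celsius
]
hits = sum(low <= truth <= high for low, high, truth in quiz)
print(f"Captured {hits}/{len(quiz)} = {hits / len(quiz):.0%} "
      f"(aim for about 90% over many questions)")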

This sums up the good parts of the book. I have not provided more details here, but rest assured you won't find much more detail than this in the book.

The bad parts:
The author bashes and mocks people so much, it's unreal. He has an especially deep hatred of managers. Here's some "statistical" evidence: I counted the number of times the author wrote the word "managers" in the book. It's 79. Here are a few quotes, and they go on and on and on ...and on:
"I heard managers say that since each product is unique, they cannot extrapolate..."
"I have known managers who simply presume the superiority of their intuition..."
"...it simply won't occur to many managers that an "intangible" can be measured"
"...her examples prove what can be done by most managers if they tried"
"...Other managers might object: "there is no way to measure that thing without spending millions" "
"Once managers figure out what they mean and why it matters, the issues in questions starts to look a lot more measurable"
"Business managers need to realize that some things seem intangible only because they just haven't defined what they are talking about"
"The problem is that when managers make choices about whether the bother to do a random sample in the first place, they are making the judgments intuitively..."
"But it has some significant advantages over much of the current measurement-stalemate thinking of some managers"

Maybe not all mentions of the word "managers" have a directly bad connotation, but I'm quite sure none of those mentions put managers in a good light.

There's more!
The author uses another formula to mock people: "those who...". I searched for usages of that formula and found 45. I won't quote them, but I hope you get the idea.

More bad things. Remember when I wrote about Emily Rosa and her debunking of supernatural powers? The author has an interesting fascination with coming back to her example. He does this 97 times in fact! With 410 pages, that gives a mention about Emily every 4.23 pages. Enrico Fermi and Eratosthenes get less attention with only 52 and 37 mentions throughout the book. Still, I think it's fair to say that repetition is an issue with this book. To top it off, the author has the arrogance to claim that with a book such as this one, Eratosthenes, Emily Rosa and Enrico Fermi would probably have been able to do a lot more.

More bad things: The author claims that there are plenty of statistics books, and this is not one of them. He advertises his book as providing general ideas applicable everywhere. Among those ideas are things like "measurements help in making a decision", "there's always more information than you think you have", "you always need less information than you think you do", "measure the things that are most important" and "take into account whether the price of the measurement is lower than what is at stake in the decision".
Am I alone in thinking that these ideas are so trivial that a book about them is not really valuable? Also, since he's talking about decisions, he never mentions the time aspect of a measurement, just the price. You'd think he would at least consider that, but nope!

I don't understand who the target audience for this book is. Is it the "managers" the author continuously mocks? Not likely. Is it people who want to learn how to measure? Probably not either, because this book doesn't really teach any measurement techniques; it just mentions three decision-making models that it barely explains.

Even more bad: The author introduces the terms that I talk about in the "good" section. That's all he does: he "introduces" them. I did learn statistics while reading this book, but that's because I spent a lot of time on Wikipedia. The author doesn't try to rigorously explain these concepts. At most, you get recipes like this:
1. Note down the numbers you get from doing X
2. Take the average of those numbers
3. Subtract the average from each number
4. Multiply the difference by 1.645
5. etc etc
(This is not an example from the book; it's just my impersonation of the author's examples. They are hard to follow on Kindle. There are not enough explanations, and then you're just left with a recipe.)
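
For what it's worth, the sort of calculation such a recipe gestures at, a 90% confidence interval for a mean from a small sample, fits in a few lines; this sketch uses invented commute times and the normal-approximation factor of 1.645 (a t-statistic would be more careful for a sample this small):

import statistics

commutes = [22, 35, 48, 19, 41, 30, 27, 55, 33, 25]   # minutes, invented data

mean = statistics.mean(commutes)
stderr = statistics.stdev(commutes) / len(commutes) ** 0.5
low, high = mean - 1.645 * stderr, mean + 1.645 * stderr
print(f"90% CI for the mean commute: {low:.1f} to {high:.1f} minutes")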

Besides not explaining complex concepts, the author also over-explains simple ones, again in a very repetitive fashion. There are a lot of unnecessary explanations of very simple graphs. One graph illustrates the cost of measurement versus the value of information: the cost rises slowly at first and shoots up as the amount of information approaches perfect information, while the value of information does the opposite, rising steeply at first and then only very slowly towards the maximum. I'm not sure how much time the author spends on this, but it felt ridiculous, so I'm reporting the incident. It's not the only one like it.
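
The asymmetry described above is the value-of-information curve; here is a hedged sketch of the ceiling on that curve, the expected value of perfect information, using toy figures that are not from the book:

import random

random.seed(2)

# Toy decision: launch a project whose net payoff is uncertain.
# The default choice uses only the prior: launch iff the expected payoff > 0.
payoffs = [random.gauss(50_000, 120_000) for _ in range(100_000)]
expected = sum(payoffs) / len(payoffs)

# EVPI: the average loss avoided if you could always skip the losing scenarios
# (or always catch the winning ones when the default is "don't launch").
if expected > 0:
    evpi = -sum(p for p in payoffs if p < 0) / len(payoffs)
else:
    evpi = sum(p for p in payoffs if p > 0) / len(payoffs)
print(f"Expected payoff of the default decision: {expected:,.0f}")
print(f"EVPI, an upper bound on what any measurement is worth: {evpi:,.0f}")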

The ugly:
This part is my personal interpretation of the author's intent, based on the book's content. The author emphasizes quite a lot that he has a company that offers calibration training. It seems to me that, at least in part, the motivation for this book was to self-advertise. That would be fair if stated up front, but it was not. The author might also have been exploiting the "statistical" fact that one can charge more for longer books.
Clever, but I'm asking for my money back on this one.

DO NOT READ THIS BOOK! IT'S TOO LONG AND REPETITIVE TO BE A GOOD INTRODUCTORY BOOK, AND CONTAINS FAR TOO LITTLE INFORMATION FOR IT TO BE ANYTHING ELSE.
Profile Image for Andy.
1,600 reviews523 followers
February 20, 2023
This is not a good book for audio because of all the mathematical equations.

I was troubled by the "subjectivist" definition of probability for personal confidence / certainty estimates. This is the approach used by so-called Super Forecasters who wind up causing Super Disasters. In a science paper, a 95% confidence interval for an odds ratio or whatever does not mean a subjective guess of the range; it's a description of the objective findings of that study. To be fair to the author, he is clear that not everyone shares his philosophy of what probabilities mean. Moreover, he does clarify in Ch. 6: "This feeling (confidence) should not be confused with evidence of effectiveness. … It is very possible to experience an increase in confidence about decisions and forecasts without actually improving things—or even making them worse."

Thanks, but then what's the point?

The book starts with good principles, like: Define objective results for what matters to be measured; and Begin with what you know you know. Unfortunately, the author doesn't seem to eat his own cooking. If he can teach better decision making in business, then why not measure that with Wall Street investors or whatnot? Show me the money. Instead, we get typical consultant guru excuses for why others can’t replicate his findings--even for his big deal intervention of showing overconfident people that they are overconfident.

There are plenty of books about statistics for different audiences that cover the basic facts included here, but this doesn't seem like the place to start for that.

Of possible interest:
How to Lie with Statistics by Darrell Huff
Profile Image for Paulo Saraiva.
13 reviews1 follower
May 23, 2020
To put it simply: the best book I have ever read about risk management.

If you want great and practical insights about what you need to measure when it comes to problem solving or decision making, this is a masterpiece. Here you will find a lot of mathematical tools that are extremely useful for clarifying situations in which we tend to think there is no way to perform objective measurement, specifically about what we usually call "intangibles".

Even when it comes to the psychology of decision making, Hubbard proposes pragmatic ways to translate some of the most widely acknowledged theories and models into mathematical language. The chapters covering the Lens and Rasch models are particularly remarkable.

A modern and combative stance against the subjectivity that permeates most of the risk management tools widely used in organizations.
Profile Image for Lawrence Peirson.
62 reviews4 followers
Read
June 23, 2022
I didn't get to finish this because my backpack was stolen from a car in San Francisco with it inside. What I did get to read (about half) I enjoyed very much and I plan to find another copy and finish what I started.

The book posits that there are no unmeasurable or intangible quantities IF you can define the decision that the quantity you are interested in is going to inform. There is always a way forward if this is the case. And after all, what's the point of measuring something if you're not going to use that measurement to inform a decision?

You already know much more than you think. The book promotes a Bayesian way of thinking when it comes to decision making under uncertainty.
Profile Image for Kevin.
691 reviews10 followers
April 21, 2020
Tedious to read, unless you want a statistics course. I was looking for the theory, not the equations. I don't think the entirety of the book was worth the few nuggets I pulled out.

The cliff notes amount to: measurement is about uncertainty reduction, not necessarily uncertainty elimination. Don't forego trying to measure something just because you know it won't be a perfect measurement. Is it a better measurement than what you're currently using? Will it be valuable in making a decision? How much is on the line in that decision?

There was another chestnut he had about the animosity towards statistics: when people say that you can prove anything with statistics, they probably don't really mean statistics. They just mean, broadly, the use of numbers, especially, for some reason, percentages. And they really don't mean "anything" or "prove". What they really mean is that numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.
Profile Image for Jason.
Author 2 books8 followers
November 20, 2019
A dense, hard-to-read book, but so worth it

It's been a while since I read (and finished) a book so dense and complicated. It was worth it, though, as it changed so much of how I think about everything, from work, estimation, and prioritization to all the data that is around us every day. So very good.
Profile Image for Z. Aroosha Dehghan.
302 reviews47 followers
November 23, 2021
I won't say it was bad, but it didn't meet my expectations. It was completely average; three stars might even be a bit generous for it.
A lot of what the book says is simply repetitive. Saying the same thing several times doesn't make a book engaging at all. It gives me the feeling that the author doesn't trust my intelligence or my memory.
Most of the book can be skimmed without missing anything, but its subject is interesting. I wish it had been better written.
