Wednesday, September 30, 2015

Frankfurt, On Inequality

Harry G. Frankfurt, professor emeritus of philosophy at Princeton University, is probably best known for his book On Bullshit. In his new book, a revised and expanded version of two journal articles published decades ago, he takes on those who find economic inequality morally objectionable. On Inequality (Princeton University Press, 2015) may not be the last word in the debate over income distribution, but it should sharpen the terms of the debate.

Eliminating income inequality, Frankfurt argues, cannot be a fundamental goal because “inequality of incomes might be decisively eliminated … just by arranging that all incomes be equally below the poverty line. Needless to say, that way of achieving equality of incomes—by making everyone equally poor—has very little to be said for it.” (p. 3) Instead, we should focus on reducing both poverty and excessive affluence. “That may very well entail, of course, a reduction of inequality. But the reduction of inequality cannot itself be our most essential ambition.” (p. 5)

It is not morally important that everybody have the same. “What is morally important is that each should have enough. If everyone had enough money, it would be of no special or deliberate concern whether some people had more money than others.” (p. 7) That is, egalitarianism is not morally significant; sufficiency is.

When we are morally disturbed by the circumstances of the very poor, we are not upset that they have less money than others but that they have too little. “What directly moves us in cases of that kind … is not a relative quantitative discrepancy but an absolute qualitative deficiency.” (pp. 41-42)

“The fundamental error of economic egalitarianism lies in supposing that it is morally important whether one person has less than another, regardless of how much either of them has and regardless also of how much utility each derives from what he has. This error is due in part to the mistaken assumption that someone who has a smaller income has more important unsatisfied needs than someone who is better off. Whether one person has a larger income than another is, however, an entirely extrinsic matter. It has to do with a relationship between the incomes of the two people. It is independent both of the actual sizes of their respective incomes and, more importantly, of the amounts of satisfaction they are able to derive from them. The comparison implies nothing at all concerning whether either of the people being compared has any important unsatisfied needs.” (pp. 46-47)

Frankfurt replaces an easy-to-understand, though inherently flawed, concept—equality—with a much thornier one—sufficiency. A person who has a sufficient amount of money is content (or it would be reasonable for him to be content) with what he has. He has no active interest in getting more.

Frankfurt deflects some obvious criticisms of this notion of sufficiency, but in the final analysis I don’t think sufficiency can be the centerpiece of either a theoretical or a practical model of income distribution. It rests on the classic economic model of the rational agent, which has been more or less debunked by behavioral economics. It takes a state of mind (contentedness), impossible to quantify and perhaps even to know, as the touchstone of a moral economic society. And it flies in the face of reality. Does Warren Buffett, who certainly has an active interest in getting more money, have an insufficient amount of money? Does the retail clerk who is not actively searching for a way to make more money thereby have a sufficient amount of money? Frankfurt’s refocus on sufficiency, and thereby on contentedness, reminds me somewhat of the attempt to use gross national happiness rather than gross domestic product as the measure of prosperity.

On Inequality may not solve the kinds of problems that liberal politicians in particular rail against, but it makes an important contribution by challenging the way these problems are formulated. It’s a worthwhile, stimulating read.

Sunday, September 27, 2015

Tetlock & Gardner, Superforecasting

We all crave knowledge of the future. Is it going to rain this weekend? Where will the equity markets be in a year? Next week? Five minutes from now? Predictive models, many using big data and statistical algorithms, have begun making inroads into this problem. But IBM Watson’s chief engineer, David Ferrucci, doesn’t think that machines will ever completely replace subjective human judgment. In forecasting, combinations of machines and experts may prove more robust than pure-machine or pure-human approaches. “So,” say the authors of Superforecasting, “it’s time we got serious about both.” (p. 24)

Philip E. Tetlock, a professor at the University of Pennsylvania and co-leader of a multiyear online forecasting study, the Good Judgment Project, and Dan Gardner, a journalist, teamed up to produce one of the best books I’ve read this year. Superforecasting: The Art and Science of Prediction (Crown Publishers, 2015) argues that “it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills. … Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs.” (pp. 9, 20)

It’s pretty easy to get started learning to forecast more accurately. A tutorial for the Good Judgment Project covering some of the basic concepts in this book and summarized in its Ten (actually eleven) Commandments appendix “took only about sixty minutes to read and yet it improved accuracy by roughly 10% through the entire tournament year. … And never forget that even modest improvements in foresight maintained over time add up. I spoke about that with Aaron Brown, an author, a Wall Street veteran, and the chief risk manager at AQR Capital Management, a hedge fund with over $100 billion in assets. ‘It’s so hard to see because it’s not dramatic,’ he said, but if it is sustained, ‘it’s the difference between a consistent winner who’s making a living, or the guy who’s going broke all the time.’” (p. 20) Did that get your attention?

Admittedly, Superforecasting doesn’t focus on the financial markets because the authors recognize that they are rife with aleatory uncertainty (the unknowable), not just epistemic uncertainty (the unknown but potentially knowable). “Aleatory uncertainty ensures life will always have surprises, regardless of how carefully we plan. Superforecasters grasp this deep truth better than most. When they sense that a question is loaded with irreducible uncertainty—say, a currency-market question—they have learned to be cautious, keeping their initial estimates inside the shades-of-maybe zone between 35% and 65% and moving out tentatively.” (p. 116) Note that, even here, superforecasters don’t just throw up their hands and say 50-50.

In a second reference to the markets, the authors compare superforecasting investing to black swan investing. Playing the low-probability, high-reward card is not the only way to invest. “A very different way is to beat competitors by forecasting more accurately—for example, correctly deciding that there is a 68% chance of something happening when others foresee only a 60% chance. … It pays off more often, but the returns are more modest, and fortunes are amassed slowly.” (p. 195)
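
To see why a small edge like that matters, here is a back-of-the-envelope sketch in Python (mine, not the authors’). It assumes a binary contract that pays $1 if an event occurs and trades at the consensus probability; the 68% and 60% figures come from the passage, everything else is illustrative.

    consensus_price = 0.60   # what "others foresee" (from the passage)
    true_prob = 0.68         # the more accurate forecast (from the passage)

    edge = true_prob - consensus_price   # expected profit per $1 contract
    print(f"edge per contract: ${edge:.2f}")   # $0.08

    # The edge is modest, which is why fortunes are amassed slowly: across
    # 1,000 such (independent, purely illustrative) contracts the expected
    # profit is only
    print(f"expected profit on 1,000 contracts: ${1000 * edge:,.0f}")   # $80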

At its core, Superforecasting teaches its readers how to think probabilistically, something that doesn’t come naturally to most people. We tend to use a two- or three-setting mental dial. Something will happen, won’t happen, or may happen. But this way of thinking gets us into trouble. The “will” and “won’t” settings reflect a faulty view that reality is fixed. Even death and taxes may not be certain someday. And the “maybe” setting “has to be subdivided into degrees of probability. … The finer grained the better, as long as the granularity captures real distinctions—meaning that if outcomes you say have an 11% chance of happening really do occur 1% less often than 12% outcomes and 1% more than 10% outcomes.” (p. 117)
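
How would you know whether your 11% forecasts really differ from your 12% forecasts? One standard check is a calibration table: bucket forecasts by stated probability and compare each bucket with how often the events actually happened. A minimal sketch, with invented data:

    from collections import defaultdict

    # (stated probability, outcome: 1 if the event happened, 0 if not) -- invented
    forecasts = [(0.10, 0), (0.11, 0), (0.11, 0), (0.12, 1), (0.12, 0),
                 (0.65, 1), (0.65, 0), (0.90, 1), (0.90, 1), (0.90, 0)]

    buckets = defaultdict(list)
    for prob, outcome in forecasts:
        buckets[prob].append(outcome)

    for prob in sorted(buckets):
        outcomes = buckets[prob]
        print(f"stated {prob:.0%}: happened {sum(outcomes) / len(outcomes):.0%} "
              f"of the time over {len(outcomes)} forecasts")

    # Over enough forecasts, if the 11% bucket really does come true less often
    # than the 12% bucket, the extra granularity is capturing a real distinction.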

What does it take to be a superforecaster? Well, for starters, a lot of time and mental energy. Those who have a superabundance of both can join the thousands of people predicting global events at the Good Judgment Project. The rest of us can use this book to improve our own, most likely more modest predictions. If, that is, we have, or are willing to cultivate, certain qualities. Superforecasters are foxes, not hedgehogs. They look at problems from multiple perspectives. They tend to be, among other things, cautious, humble, nondeterministic, actively open-minded, intellectually curious, reflective, numerate, pragmatic, and analytical, with a growth mindset and grit. “The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.” (p. 155)

Superforecasting is a must-read book for everyone who is sick to death of “the guru model that makes so many policy debates so puerile: ‘I’ll counter your Paul Krugman polemic with my Niall Ferguson counterpolemic, and rebut your Tom Friedman op-ed with my Bret Stephens blog.’” (p. 24) As the authors write, “All too often, forecasting in the twenty-first century looks too much like nineteenth-century medicine. There are theories, assertions, and arguments. There are famous figures, as confident as they are well compensated. But there is little experimentation, or anything that could be called science, so we know much less than most people realize. And we pay the price. Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them—including monetary losses, missed opportunities, unnecessary suffering, even war and death.” (p. 42) It’s time for a change—for all of us to change.

Thursday, September 24, 2015

Weightman, Eureka

“All modern inventions have an ancient history.” Thus begins Gavin Weightman’s Eureka: How Inventions Happen (Yale University Press, 2015). But, he adds, “what is striking” is that “the inventor who makes the breakthrough is invariably outside the mainstream of existing industry and technology.”

Weightman focuses on five familiar technologies: the airplane, television, bar code, personal computer, and cell phone.

Some of these stories are better known than others. Most people know a great deal, for instance, about the Wright brothers, and if you want to know even more, you now have David McCullough’s best-selling (though, to me, disappointing) biography.

But how many people know about the birth of the bar code? Joe Woodland isn’t exactly a household name, and his 1949 solution to the problem of a distraught supermarket manager didn’t exactly fly off the shelves. Without scanner technology and microcomputers the bar code was just a pipedream. Moreover, it had to be approved by a committee, the Symbol Selection Committee, made up of representatives of major supermarket chains and grocery manufacturers. Not until July 1974 was the first true UPC scanned in a supermarket—on a ten-pack of Wrigley’s gum.

Weightman’s book is a journey through the history of invention, some of its stops obvious precursors of the technologies on which we rely today, others more surprising steps along the way. To take but a single example: Alois Senefelder’s invention of lithography (an invention mothered by necessity, since he needed a way to print his plays and didn’t have the money to buy presses and type). “And,” Weightman continues, “Senefelder’s discovery did more than revolutionise the art of printing: it inspired the creation of an entirely new way of copying images which in its early days went by the name of heliography” and, later, the daguerreotype. Fast forward and we arrive at the technique of using photography to print circuits.

Eureka is of necessity a series of tangled stories. It isn’t guided by any overarching hypothesis about the history of science and technology (except that one thing leads to another), so the stories aren’t designed to illustrate a point. That makes them all the more enjoyable.

Wednesday, September 23, 2015

Colvin, Humans Are Underrated

If you want to make it in our increasingly computerized world, you’d better learn to play well with others. This is the grossly simplified thesis of Geoff Colvin’s new book. (Colvin is a senior editor at large for Fortune, but you probably know him best for his Talent Is Overrated, which touted deliberate practice.) In Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will (Portfolio / Penguin, 2015) Colvin asks how we human beings can carve out a meaningful work space for ourselves when computers do so many things—and will increasingly do even more things—better than we can.

He argues that we can be great performers simply by being human, where being human means being social. “We are hardwired to connect social interaction with survival. No connection can be more powerful.” (p. 38) “Social interaction is what our brains are for.” (p. 39)

Computers may take over an increasing number of tasks that human beings used to perform, but, Colvin argues, there’s a limit to what we will accept computers doing. The question therefore is not what computers will never be able to do, a perilous line of inquiry, but what activities “we humans, driven by our deepest nature or by the realities of daily life, will simply insist be performed by other humans, regardless of what computers can do.” (p. 42)

He suggests that all important decisions will remain in the hands of human beings because “it’s a matter of social necessity that individuals be accountable for important decisions.” (p. 43) We’ll also perform the sorts of tasks that we haven’t clearly articulated and so aren’t amenable to computer analysis, goals and strategies that people must work out for themselves and that are best developed in groups. And then there are the tasks that “our most essential human nature demands” be performed by human beings—a doctor giving us a diagnosis, for instance, even if a computer supplied it.

The demand for cognitive skill in the workplace peaked in about the year 2000. The jobs that college graduates have been getting since that time require less brain work—“thus the widely noted upsurge in file clerks and receptionists with bachelor’s degrees.” (p. 47)

Cognitive skills are taking a back seat to social relationship skills. For instance, the work of lawyers is increasingly being taken over by infotech. Smart lawyers can still do well, “but not just because they’re smart. The key to differentiation lies entirely in the most deeply human realms of social interaction: understanding an irrational client, forming the emotional bonds needed to persuade that client to act rationally, rendering the sensing, feeling judgments that clients insist on getting from a human being.” (p. 48)

Beleaguered humanities majors—and women—may get a boost in the new economy. “Skills that employers badly want—critical thinking, clear communicating, complex problem solving—‘are skills taught at the highest levels in the humanities.’” (p. 178) And “the traits, tendencies, and abilities for which women have long shown greater strength than men will prove highly valuable for people of either sex who possess them.” (p. 164)

I would like to say that I was reassured by Colvin’s book. But I keep thinking of instances of personal interaction that we once took for granted and that are now distant memories, retail clerks being a prime example. As technology advances, people adapt. In time we no longer miss having a human being on the other side of the transaction.

Moreover, Colvin’s world of social/economic relationships doesn’t create new jobs to replace the ones lost to technology. It simply, as far as I can ascertain, draws a line in the sand that we dare (or don’t dare) technology to cross. I would hate to have to defend that line.

Sunday, September 20, 2015

Teitelbaum, The Most Dangerous Trade

Of all the ways to make money in the financial markets, being a short seller is one of the toughest. The short seller is fighting the upward bias of the equity markets as well as the wrath of deep-pocketed, litigious individuals with vested interests in the stocks he is targeting. He has to be both a sleuth and a promoter; after all, what good is all his detective work if other investors don’t know what he uncovered and don’t join him in putting downward pressure on the stock?

In The Most Dangerous Trade: How Short Sellers Uncover Fraud, Keep Markets Honest, and Make and Lose Billions (Wiley, 2015), Richard Teitelbaum, a financial journalist, has written illuminating profiles of ten top short sellers, complete with their investing strategies. Combining interviews with well-researched back stories, he explores the highs and lows (and there are a lot of lows) of short selling.

Bill Ackman, Manuel Asensio, Jim Chanos, David Einhorn, Carson Block, Bill Fleckenstein, Doug Kass, David Tice, Paolo Pellegrini, and Marc Cohodes are the featured investors. We learn about their early years, how they ended up being short sellers, even the significance of their fund names. Why Muddy Waters, for instance? Block, trying to find a good name for his nascent firm, recalled a Chinese proverb: “Muddy waters make it easy to catch fish.”

We read about positions that worked and those that didn’t—and what these investors learned from the latter. We learn how they construct their portfolios (including long positions) and how they try to mitigate risk (sometimes with options).

Each short seller has his own style, but the investors profiled in this book share some common traits. They are passionate, they work exceedingly hard, and they are resilient—even those who ultimately didn’t make it. They scour the equity markets looking for stocks whose price significantly overstates their value. Some have macro theses, some are more akin to microbe hunters. But they are all looking for stocks that should, if they are correct and if other investors embrace their research, fall, even in a rising market, though that is sometimes too much to hope for.

The Most Dangerous Trade is a book that’s hard to put down. Teitelbaum knows how to keep his reader involved. Whether you just like a good story or are thinking about starting a hedge fund, whether you are an individual investor who wants to learn how to pick stocks or an institutional investor debating portfolio construction, Teitelbaum’s book will speak to you. If you don’t come away with at least one or two good ideas, you didn’t read it carefully enough.

Saturday, September 19, 2015

Fuld, Stock Market Trivia

Do you live and breathe the financial markets? Are you a trivia nut? Do you just want to have an hour or so of fun? If so, I can recommend Stock Market Trivia (2013) by Fred Fuld III. The author sent me a copy, and I had an enjoyable time flipping through it. “Flipping through” because, alas, my brain was already cluttered with a lot of market trivia. And with many of the weird words of Wall Street, defined in the second part of the book.

Here are a couple of examples of what you can find in this book. The smallest stock exchange in the world, measured by the number of stocks traded, is the Douala Stock Exchange in Cameroon, with only three listings at the moment.

One pound equals $490. No, of course this is not an exchange rate. Rather, if you stack 490 dollar bills on a scale, they will weigh one pound.

From the two pages of “funny mergers”: “If the following companies were to merge: Caterpillar, Gottschalks, Uranium Energy, Tongjitang Chinese Medicines, you would end up with Cat Got Ur Tong.”

And can you believe that any company would have the ticker symbol SCAM? It is usually published as SCAM.L since it trades on the London Stock Exchange and is a highly regarded British mutual fund.

Friday, September 18, 2015

Javaheri, Inside Volatility Filtering, 2d ed.

Inside Volatility Filtering: Secrets of the Skew by Alireza Javaheri, the head of Equities Quantitative Research Americas at JP Morgan, is a book for quants. The first edition, which appeared ten years ago, was based on his Ph.D. dissertation and won the Wilmott Award. In this revised, updated second edition (Wiley, 2015), Javaheri draws on feedback he received at conferences and in the courses he taught at NYU’s Courant Institute of Mathematical Sciences and at Baruch College.

My mathematical skills, though ever improving, are not yet up to the task of writing a meaningful review of this book. So consider this a notice rather than a review.

Here’s a description of the book’s contents from the jacket copy: “Inside Volatility Filtering, Second Edition presents a new approach to volatility estimation in financial econometrics based on a more accurate estimation of the hidden state. Based on the idea of ‘filtering,’ this practical guide lays out a two-step framework involving a Chapman-Kolmogorov prior distribution followed by Bayesian posterior distribution to develop a robust estimation based on all available information. This new edition gives you an edge by showing you how to: base volatility estimations on more accurate data, integrate past observation with Bayesian probability, exploit posterior distribution of the hidden state for optimal estimation, and boost trade profitability by identifying ‘skewness’ opportunities.”
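
For readers who want at least a feel for what that two-step framework looks like, here is a minimal sketch of a generic predict/update filter in its simplest linear-Gaussian form, a one-dimensional Kalman filter. It is my illustration of the general idea, not Javaheri’s volatility models, and every number in it is made up.

    def filter_step(mean, var, observation, process_var=0.01, obs_var=0.1):
        # prediction (Chapman-Kolmogorov) step: propagate the prior forward
        prior_mean, prior_var = mean, var + process_var
        # Bayesian update step: condition the prior on the new observation
        gain = prior_var / (prior_var + obs_var)
        post_mean = prior_mean + gain * (observation - prior_mean)
        post_var = (1 - gain) * prior_var
        return post_mean, post_var

    mean, var = 0.0, 1.0                  # initial guess about the hidden state
    for obs in [0.20, 0.25, 0.18, 0.30]:  # invented observations
        mean, var = filter_step(mean, var, obs)
        print(f"posterior mean {mean:.3f}, variance {var:.3f}")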

Wednesday, September 16, 2015

Farley, Wall Street Wars

In Wall Street Wars: The Epic Battles with Washington That Created the Modern Financial System (Regan Arts, 2015) Richard E. Farley takes us back to the 1930s—to the Emergency Banking Act, the Glass-Steagall Banking Act of 1933, the Securities Act of 1933, the Securities Exchange Act of 1934, and the creation of the Securities and Exchange Commission. A lively account, the book puts flesh on the bones of the politicians responsible for this groundbreaking financial legislation.

In the early years after the Crash and Depression, the public blamed the country’s political leaders “(Hoover in particular) and renegade Wall Street pool operators and short sellers. Remarkably, the nation’s banking establishment had successfully avoided the worst of the public’s wrath.” (p. 37) But Ferdinand Pecora, the fifth lawyer in less than a year to fill the position of chief counsel to a subcommittee of the Senate Banking and Currency Committee investigating stock market practices, changed all that. Pecora questioned Charlie Mitchell, chairman of the board of directors of National City Bank, the predecessor of Citigroup, and showed both that he was a monumental tax evader and that he had duped the investors in National City. After several days of questioning other National City executives and exposing their gross misconduct, Pecora became a celebrity, “the face of justice for the average man” against “the malefactors of Wall Street.” (p. 52)

Just exposing and punishing Wall Street “banksters” was not, of course, enough to get the financial system on firmer footing. Congress needed to draft sweeping legislation. Carter Glass, senator from Virginia, was the man to get the job done. He was, “to put it mildly, a difficult man. He was ill-tempered, racist, and often in poor health, physically and mentally, suffering frequent nervous breakdowns and hospitalizations. … He had never held a job in a private sector financial institution, and what limited formal education he had ended when he was fourteen. He is also the single most important lawmaker in the history of American finance. He drafted and shepherded through Congress the legislation creating the Federal Reserve System and later served as President Wilson’s secretary of the Treasury.” (p. 31)

Glass was an advocate of large banks with many branches. These banks might behave badly, “but they were smart and they were solvent. … Glass believed that too many banks that were ‘too small to save’ were a far greater risk than banks that were ‘too big to fail.’” (p. 79)

Here I’ve given but a tiny glimpse into the thinking behind, and the wrangling over, the legislation that shaped and, in some cases still shapes, our financial system. Farley’s account is illuminating and, as such, valuable reading for anyone who cares about how we got to where we are today.

Sunday, September 13, 2015

Nisbett, Mindware

The order in which you ask questions can make all the difference in the answers you receive. Similarly, I would suggest, though presumably not for the same reasons, the order in which you read books can make a huge difference in your opinion of them. I read Richard E. Nisbett’s Mindware: Tools for Smart Thinking (Farrar, Straus and Giroux, 2015) after reading probably far too many somewhat similar books. As a result, Mindware felt a tad tired.

If, however, this is your first, second, or even third foray into statistical thinking, decision analysis, and/or behavioral economics, you may well find Mindware revelatory. Written for the layman, the book moves along at a good clip and deals with things we care about—or should care about.

Nisbett, a professor of psychology at the University of Michigan, demonstrates the high cost of not constructing proper experiments to test the effectiveness of interventions. “D.A.R.E. programs don’t produce less teen drug or alcohol use, Scared Straight programs result in more crime, not less, and grief counselors may be in the business of increasing grief rather than reducing it.”

He exposes the flaws inherent in multiple regression analysis, using such examples as class size, food supplements, and the long-term unemployed.

He tells stories to advance his arguments. For instance, the one about Eric Schmidt interviewing Barack Obama in front of a large audience of Google employees in the fall of 2007, shortly after Obama announced that he was running for president. “As a joke, Schmidt’s first question was, ‘What is the most efficient way to sort a million 32-bit integers?’ Before Schmidt could ask a real question, Obama interrupted: ‘Well, I think the bubble sort would be the wrong way to go,’ a response that was in fact correct. … In the audience that day was a product manager … who made a decision on the spot to go to work for Obama. ‘He had me at bubble sort.’”
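
For the curious, Obama’s answer was right for a reason you can see in a few lines of code: bubble sort makes on the order of n² comparisons, so on a million 32-bit integers it is hopeless next to the library sort (and a radix sort would be faster still). A quick sketch, mine rather than Nisbett’s:

    import random, time

    def bubble_sort(a):
        a = list(a)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    data = [random.getrandbits(32) for _ in range(5_000)]   # n kept small on purpose

    t0 = time.perf_counter(); bubble_sort(data); t1 = time.perf_counter()
    sorted(data); t2 = time.perf_counter()
    print(f"bubble sort: {t1 - t0:.2f}s   built-in sort: {t2 - t1:.4f}s")

    # Scaling the quadratic algorithm from 5,000 items to 1,000,000 multiplies
    # its running time by roughly 40,000.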

Here are a couple of takeaways from the book that may be of particular interest to traders and investors.

First: the detection of real patterns, as opposed to all the patterns we see even when they aren’t there. “The unconscious mind can actually be superior to the conscious mind in learning highly complex patterns. More than that, in fact: it can learn things that the conscious mind can’t. … We know that people can learn them because (1) participants became faster over time at pressing the correct button and (2) when the rules suddenly changed, their performance deteriorated badly. But the conscious mind was not let in on what was happening. Participants didn’t even consciously recognize that there was a pattern, let alone know exactly what it was.”

“Laborious calculation is … not involved in complex pattern detection. … Your nervous system is an exquisitely designed pattern detector. But the process by which it sees patterns is completely opaque to us.”

And second: Popper and poppycock, a personal favorite. Popper, you recall, maintained that induction is unreliable and that hypotheses can only be disconfirmed, not confirmed. Nisbett retorts: “Though correct, Popper’s contention is pragmatically useless. We have to act in the world, and falsification is only a small part of the process of generating knowledge to guide our actions. Science advances mostly via induction from facts that support a theory. … The glittering prizes in science don’t go to the people who falsified someone else’s theory… Rather, the laurels are for scientists who have made predictions based on some novel theory and demonstrated that there are important facts that support the theory and are difficult to explain in the absence of the theory. Scientists [and certain well-known investors/traders] are much more likely to think they accept Popper’s anti-inductive stance than philosophers of science are to endorse it. The ones I know think it’s utterly wrong.”

Saturday, September 12, 2015

McLean, Shaky Ground

At the request of the publisher, I removed my earlier review and am reposting it to coincide with the launch date of the book.


Columbia University Press has launched a new publishing imprint, Columbia Global Reports, to produce “six short, ambitious works of journalism and analysis a year, each on a different under-reported story in the world.” Bethany McLean, co-author of The Smartest Guys in the Room and All the Devils Are Here, has written the first title, Shaky Ground: The Strange Saga of the U.S. Mortgage Giants. It’s an auspicious beginning for the series.

Mervyn King, the former governor of the Bank of England, told the author: “Most countries have socialized health care and a free market for mortgages. You in the United States do exactly the opposite.” (p. 9) And, he didn’t add but I will, we do both badly.

In this well-crafted if depressing 160-page book McLean recounts the origins of the housing crisis, its temporary fix, and the aftermath, which she describes as “limbo.”

Fannie Mae and Freddie Mac have been in government conservatorship since the fall of 2008, at which point they had a combined $5.3 trillion in outstanding debt. If this figure had been put on the government’s balance sheet, the public national debt would have increased by about 50%. Partly for this reason, some of their common and preferred stock remained in private hands.

Hedge funds made big bets in the darkest days of Fannie and Freddie that these two companies would become profitable again and that they would make a fortune on stock they had bought for next to nothing. They were right on the first count. The companies have paid $231 billion back to the U.S. Treasury, over $40 billion more than they got from taxpayers. But they were wrong on the second count since, in 2012, the government “changed the terms of the bailout and is now directing almost all their profits toward reduction of the federal deficit.” (p. 19) The hedge funds cried foul, filing some 20 lawsuits, alleging that the bailout’s third amendment, the one in 2012, “violates the Constitution’s Fifth Amendment: The government cannot confiscate private property without paying for it. … [T]his isn’t about the government’s actions in a time of crisis, but rather about the government’s actions after the crisis had passed. … A joke goes: ‘What’s the difference between the GSEs in the United States and Repsol in Argentina?’ The punchline: ‘Argentina settled.’” (p. 125)

The investors have been losing their battle in the courts, with one judge dismissing a suit because of a provision in the Housing Economic Recovery Act that reads: “No court may take any action to restrain or affect the exercise of powers or functions of the Director as a conservator or a receiver.” (p. 128) But they haven’t given up on the GSEs. Bill Ackman bought about 13% of the remaining 20% of the GSEs’ common stock in the spring of 2014, and in February of this year Bruce Berkowitz’s Fairholme Fund picked up nearly five million shares. They are betting that Fannie and Freddie will be revitalized.

The federal government has been trying, though not very hard, to kill off Fannie and Freddie ever since the housing crisis. At a conference in early 2015, a Treasury official said that the administration “’believes that private capital should be at the center of the housing finance system.’ And he reiterated that Fannie and Freddie had to die. ‘The critical flaws in the legacy system that allowed private shareholders and senior employees of the GSEs to reap substantial profits while leaving taxpayers to shoulder enormous losses cannot be fixed by a regulator or conservator because they are intrinsic to the GSEs’ congressional charters,’ he said.” (pp. 146-47)

But Fannie and Freddie soldier on, severely undercapitalized and on shaky ground. They seem destined to have long, if not necessarily happy, lives. And the cult of homeownership remains intact—at least in Washington, if not among millennials.

Wednesday, September 9, 2015

Ellenberg, How Not to Be Wrong

I came across Jordan Ellenberg’s book How Not to Be Wrong: The Power of Mathematical Thinking (Penguin, 2014) when I was looking for another title. It was a serendipitous find, one of the best books I’ve read this year.

The book’s ideal reader is numerate and politically liberal. Ellenberg’s examples often display an openly liberal bias, which would not go down well with the Fox News crowd. A case in point: Ellenberg’s explanation of why, contrary to Republican dogma, the Reagan tax cut resulted in less tax revenue, not more. Even if one believes in the power of the Laffer curve (which is overly simplistic in and of itself because, for instance, it ignores spending as a variable), the question is where we are on the curve. Assume the x-axis represents the tax rate, from 0% to 100%, and the y-axis revenue. The Laffer curve slopes up from 0%, peaks at some point, and then slopes down to 100%. If we’re to the right of the peak, a government that adopts Laffer-curve thinking should lower the tax rate to increase revenue; if we’re to the left, however, it should raise the tax rate. Most likely, Ellenberg suggests, we were already to the left of the Laffer peak when Reagan lowered taxes—and saw a significant decrease in revenue from personal income taxes.
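
For the numerately inclined, the whole argument fits in a toy model. The curve below is mine, not Ellenberg’s, and it puts the revenue-maximizing rate at 50% purely for illustration; where the real peak sits is precisely what is in dispute.

    # A deliberately toy Laffer curve: revenue is zero at a 0% rate and at a
    # 100% rate and peaks in between. revenue = rate * (1 - rate) puts the
    # peak at 50% for illustration only.

    def toy_revenue(rate):
        return rate * (1 - rate)

    for rate in [0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0]:
        print(f"rate {rate:.0%}: revenue index {toy_revenue(rate):.2f}")

    # To the left of the peak, cutting the rate lowers revenue; to the right,
    # cutting it raises revenue. The curve by itself cannot tell you which side
    # of the peak the economy is on.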

Ellenberg uses another political example to show why you shouldn’t talk about percentages of numbers when you’re dealing with a combination of positive and negative numbers. In June of 2011 Wisconsin’s Republican Party issued a news release touting the job-creating record of its governor, Scott Walker. That month the U.S. economy had added only 18,000 jobs. Wisconsin, by contrast, added 9,500 jobs. “Today,” the statement read, “we learned that over 50% of U.S. job growth in June came from our state.” The problem with that claim is that Minnesota added 13,000 jobs (as Ellenberg writes, “70% of all jobs created—by now the arithmetical problem should be evident”), and four other states also outpaced Wisconsin’s job gains. Job losses in other states came close to balancing out job gains in states like Wisconsin and Minnesota.
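
The arithmetic is worth spelling out. Using the figures quoted above (and treating the 18,000 as the net national number, which is how Ellenberg presents it):

    net_us_jobs = 18_000   # net national job growth that June
    state_jobs = {"Wisconsin": 9_500, "Minnesota": 13_000}

    for state, jobs in state_jobs.items():
        print(f"{state}: {jobs:,} jobs, 'share' of national growth = {jobs / net_us_jobs:.0%}")
    # Wisconsin comes out at 53%, Minnesota at 72% (roughly the 70% Ellenberg
    # cites; the gap is rounding in the round numbers quoted here). The two
    # 'shares' alone exceed 100%, which is only possible because the 18,000
    # denominator nets job gains in some states against job losses in others.
    # A percentage of a total that mixes positive and negative numbers is
    # meaningless.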

Ellenberg, to my mind, is at his best when he makes tough mathematical concepts, such as a ten-dimensional vector, comprehensible. And he does just that when explaining the correlation between average January 2011 and January 2012 temperatures in ten California cities. The two vectors point in roughly the same direction. “The correlation between the two variables is determined by the angle between the two vectors.” (p. 277) When the angle is acute, the two variables are positively correlated; when it is obtuse, they are negatively correlated; when the angle is a right angle, the vectors are orthogonal. (If you want to know the meaning of the word “orthogonal,” just ask Chief Justice John Roberts and Justice Antonin Scalia. Ellenberg includes an amusing exchange from a recent Supreme Court oral argument.)
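
If you want to see that identity at work, here is a small sketch. The temperatures are invented stand-ins for Ellenberg’s ten California cities; the point is only that the Pearson correlation of two series equals the cosine of the angle between the two mean-centered vectors.

    import numpy as np

    jan_2011 = np.array([55, 58, 47, 60, 52, 49, 63, 45, 57, 51], dtype=float)
    jan_2012 = np.array([54, 60, 49, 62, 50, 48, 65, 46, 58, 53], dtype=float)

    # ordinary Pearson correlation
    r = np.corrcoef(jan_2011, jan_2012)[0, 1]

    # cosine of the angle between the mean-centered ten-dimensional vectors
    a = jan_2011 - jan_2011.mean()
    b = jan_2012 - jan_2012.mean()
    cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(f"correlation = {r:.4f}, cosine of angle = {cos_angle:.4f}")   # identical

    # Acute angle (cosine > 0): positively correlated. Obtuse (cosine < 0):
    # negatively correlated. Right angle (cosine = 0): orthogonal.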

Establishing the efficacy of drugs or other medical treatment is notoriously difficult, in part because correlation is not transitive. For instance, niacin increases HDL, and a higher HDL is associated with a lower risk of cardiovascular events. But patients who got niacin had just as many heart attacks and strokes as the rest of the population. That is, niacin is correlated with high HDL and high HDL is correlated with a low risk of heart attack, but niacin isn’t ipso facto correlated with a low risk of heart attack.
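
A tiny numerical example (mine, not Ellenberg’s data) makes the non-transitivity concrete: x is correlated with y, and y with z, yet x and z are unrelated, which is exactly the logical pattern in the niacin story.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    z = rng.normal(size=10_000)   # independent of x
    y = x + z                     # y is driven by both x and z

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print(f"corr(x, y) = {corr(x, y):.2f}")   # about 0.7
    print(f"corr(y, z) = {corr(y, z):.2f}")   # about 0.7
    print(f"corr(x, z) = {corr(x, z):.2f}")   # about 0.0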

As books on mathematical (primarily statistical) thinking go, Ellenberg’s is a keeper. It’s informative, witty, and a page-turner. Who could ask for anything more?

Sunday, September 6, 2015

Akerlof & Shiller, Phishing for Phools

George A. Akerlof and Robert J. Shiller, who previously collaborated to produce Animal Spirits, have joined forces again. Their new book is Phishing for Phools: The Economics of Manipulation and Deception (Princeton University Press, 2015).

Their thesis is simple but powerful: that “competitive markets by their very nature spawn deception and trickery, as a result of the same profit motives that give us our prosperity.” (p. 165) Economies “have a phishing equilibrium in which every chance for profit more than the ordinary will be taken up.” (p. 2) Free-market equilibrium undermines our plans to eat healthily, it makes us pay too much for our cars and houses, it transforms rotten assets into gold.

We have weaknesses that can be exploited (monkeys on our shoulders), weaknesses that free markets by their very nature exploit. Akerlof and Shiller modestly claim to be making only “a small tweak to the usual economics (by noticing the difference between optimality in terms of our real tastes and optimality in terms of our monkey-on-the-shoulder tastes). But that small tweak for economics makes a great difference to our lives. It’s a major reason why just letting people be Free to Choose—which Milton and Rose Friedman, for example, consider the sine qua non of good public policy—leads to serious economic problems.” (p. 6)

In 1930 John Maynard Keynes projected what life would be like in 2030. In one respect he was pretty close: real income per capita in the U.S. was 5.6 times higher in 2010 than it was in 1930. (He predicted it would be eight times higher by 2030.) But in the other, he was dead wrong. People aren’t worrying about how to use their surfeit of leisure; they’re still worrying about how to pay the bills. “[F]ree markets have … invented many more ‘needs’ for us, and, also, new ways to sell us on those ‘needs.’ All these enticements explain why it is so hard for consumers to make ends meet. … Some say that our predicament is a product of the consumerism of the modern world. … But to our minds, the central problem lies in the equilibrium. The free-market equilibrium generates a supply of phishes for any human weakness. Our real per capita GDP can go up five-and-a-half-fold again, and then do it again; we will still be in the same predicament.” (pp. 21-22)

Akerlof and Shiller devote the bulk of their book to providing examples of phishing. They explain how reputation mining contributed to the financial crisis, why the buyers of the rotten mortgage-backed securities were so gullible, and why the financial system was so vulnerable to the discovery that the securities were rotten. They illustrate how advertisers graft stories of their own onto the mental narratives in our minds. They analyze a study showing that blacks and women are charged more for cars—black men a staggering 9% more. This even when, in the study, the testers were chosen to be as similar as possible in age and education, when “they drove similar rental cars to the dealers; wore similar ‘yuppie’ clothes; indicated no need for financing; and gave the same home address.” (p. 61) The authors describe how credit cards entice us to spend a great deal more than we would if we paid with cash. They give examples from the worlds of pharma, food, and lobbying and explore the S&L crisis and junk bonds.

As the campaign season kicks into gear, it’s perhaps timely to look at one theme in their critique of the Citizens United decision. They write: “Our view of free speech closely mirrors our view of free markets. We view both as critical for economic prosperity; and free speech as especially critical for democracy. But just as phishing for phools yields a downside to free markets, similarly, it yields a downside to free speech. Like markets, free speech also requires rules to filter the functional from the dysfunctional.” (p. 160) The majority opinion, written by Justice Kennedy, “seems to treat speaking solely as conveyance of information, without consideration of its role of persuasion, inevitably with its phish for fools. … Speech is also a way to convince other people to act in our interests.” (p. 161)

Phishing for Phools forswears technical language, making this book accessible not only to economists but to consumers and policymakers. It should make everyone rethink the unfettered free-market model.

Wednesday, September 2, 2015

Gray et al., DIY Financial Advisor

Models beat experts—or, stated more cautiously, models typically beat experts. This is the rallying cry of DIY Financial Advisor: A Simple Solution to Build and Protect Your Wealth (Wiley, 2015) by Wesley R. Gray, Jack R. Vogel, and David P. Foulke, all managing members of Alpha Architect. Whether or not you believe this claim (and despite the seeming preponderance of evidence in its favor there are still a lot of holdouts; take, for instance, the argument of Jon Faust at Jackson Hole against rule-based monetary policy), models have obvious advantages over experts. For one thing, they are free of the cognitive biases that undermine human judgment.

DIY Financial Advisor continues in the fine tradition of Wesley R. Gray’s earlier book, co-authored with Tobias E. Carlisle, Quantitative Value. (Their new book, Quantitative Momentum, is scheduled to be published in January.) After an extensive review of the literature and the team’s own backtesting, the authors offer up simple, easily implemented investing strategies. For geeks, they provide brief summaries of research papers that point to potentially fruitful areas for further study.

They describe simple models that work: an asset allocation model, a risk management model, and security selection models.

The asset allocation model is, if I remember correctly, the one that Harry Markowitz himself uses—the old-fashioned 1/N, equal-weight portfolio. The authors include tables showing how it performed against much more sophisticated models. The upshot: it came out the overall winner.
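
If you are curious how little machinery the 1/N rule actually requires, here is a minimal sketch. It is my own illustration, not the authors’ implementation, and the asset classes and dollar figures are placeholders.

    def rebalance_to_equal_weight(holdings):
        # given current dollar values per asset, return the trades that restore 1/N
        target = sum(holdings.values()) / len(holdings)
        return {asset: round(target - value, 2) for asset, value in holdings.items()}

    holdings = {"US stocks": 120_000, "Intl stocks": 90_000, "Real estate": 95_000,
                "Commodities": 80_000, "Bonds": 115_000}

    print(rebalance_to_equal_weight(holdings))
    # buys the laggards, trims the leaders; repeated on a schedule, that is
    # essentially the whole strategy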

The risk management model, which they have dubbed ROBUST, combines a simple moving average strategy with a time series momentum strategy. As the book shows, it performed well across five asset classes—U.S. stocks, international stocks, real estate, commodities, and bonds.
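
For a sense of what such a combination looks like in practice, here is a bare-bones sketch. The 12-month lookbacks and the 50/50 split between the two rules are my assumptions for illustration; the book’s exact ROBUST rules and parameters may differ.

    def ma_signal(prices, lookback=12):
        # 1 = invested if the latest price sits above its trailing moving average
        return 1 if prices[-1] > sum(prices[-lookback:]) / lookback else 0

    def tsmom_signal(prices, lookback=12):
        # 1 = invested if the trailing return over the lookback period is positive
        return 1 if prices[-1] / prices[-lookback - 1] - 1 > 0 else 0

    def exposure(prices):
        # half the capital follows each rule, so exposure is 0, 0.5, or 1
        return 0.5 * ma_signal(prices) + 0.5 * tsmom_signal(prices)

    monthly_prices = [100, 101, 99, 102, 104, 103, 106, 108, 107, 110, 112, 111, 113]
    print(exposure(monthly_prices))   # 1.0 here: both rules say stay invested
    # Applied to each of the five asset classes, this scales exposure up or
    # down as trends roll over.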

As for stock selection, they describe a value model, a momentum model, and a model that combines these two drivers of stock performance.
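
And here, in the same spirit, is the skeleton of a combined screen (again my illustration; the book’s specific metrics, universe, and weights may differ): rank the universe on a value measure and a momentum measure, average the ranks, and buy the top of the combined list.

    stocks = {
        # ticker: (earnings yield as a value proxy, trailing 12-month return as momentum)
        # tickers and numbers are hypothetical
        "AAA": (0.10, 0.25),
        "BBB": (0.04, 0.40),
        "CCC": (0.12, -0.05),
        "DDD": (0.06, 0.10),
    }

    def ranks(metric):
        ordered = sorted(stocks, key=lambda t: stocks[t][metric], reverse=True)
        return {ticker: i + 1 for i, ticker in enumerate(ordered)}   # 1 = best

    value_rank, momentum_rank = ranks(0), ranks(1)
    combined = {t: (value_rank[t] + momentum_rank[t]) / 2 for t in stocks}

    for ticker in sorted(combined, key=combined.get):
        print(ticker, combined[ticker])
    # AAA, decent on both measures, tops the list ahead of the purely cheap
    # stock (CCC) and the pure high flyer (BBB).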

The result is a book that every retail investor should read—and that the purveyors of complex strategies should use as a benchmark against which to test their products’ alleged outperformance. It’s clearly written, quantitatively supported, and amply documented. A model of a good investing book.