Blogs review: The microfoundations of macroeconomics


By: Jérémie Cohen-Setton | Date: March 9, 2012 | Topic: Innovation & Competition Policy

What’s at stake: The role of aggregate or ad-hoc models for policy discussions, in an age when journal papers in macro theory are always microfounded DSGE models, was brought to the forefront more than two years ago by Paul Krugman’s provocative essay ‘How Did Economists Get It So Wrong?’ (see here for a review). Since then, an interesting discussion – in the sense that it is not a discussion between those who do not understand the language of modern macroeconomics and those who do – has been going on in the blogosphere on the importance of microfoundations for macroeconomic analysis. In a previous post, we outlined recent extensions of the basic IS-LM framework and pointed to a specific strand (modeling financial frictions in New Keynesian models) of a burgeoning literature: models with heterogeneous agents. We use this week’s flurry of debate in the blogosphere to provide more background on these heterogeneous-agent models and on other alternative approaches to the representative agent framework (behavioral macro models and agent-based models).

Microfounded and some other useful aggregate models

Mark Thoma points out that the reason many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The New Keynesian model was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. But the standard versions of this model do not explain how financial collapses of the type we just witnessed come about, and hence have little to say about what to do about them.
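For readers who want the reference made concrete: under Calvo pricing, each firm can reset its price in a given period only with probability 1 − θ. In the simplest textbook version, this aggregates to the purely forward-looking New Keynesian Phillips curve

\[
\pi_t = \beta\, E_t \pi_{t+1} + \frac{(1-\theta)(1-\beta\theta)}{\theta}\, \widehat{mc}_t ,
\]

where π_t is inflation, β the discount factor, and mc_t real marginal cost in log-deviation (the exact slope coefficient varies across specifications). Note what the structure contains: staggered price-setting and nothing else. There are no balance sheets, no leverage, no intermediaries, which is exactly why, as Thoma says, the standard model is silent on financial collapse.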

Simon Wren-Lewis argues that some aggregate models contain critical features that can be derived from a number of different microfoundations. In that situation, it is natural to want to work with these aggregate models. We could even say that they are more useful, because they have a generality that would be missing if we focused on one particular microfoundation. Suppose there is not just one, but a variety of particular worlds that would lead to this set of aggregate macro relationships. Suppose, furthermore, that more than one of these particular worlds is a reasonable representation of reality. In these circumstances, it would seem sensible to go straight to the aggregate model, and ignore microfoundations.

In a follow-up post, Simon Wren-Lewis argues that the microfoundations purist view is a mistake because it confuses ‘currently has no clear microfoundations’ with ‘cannot ever be microfounded’. Developing new microfounded macro models is hard, because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags to the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia, and rework all relevant optimization problems with this new ingredient added. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behavior elsewhere, and we need to go back to the drawing board. It is very important to do all this, but it takes time. So the use of aggregate (or useful, or ad hoc) models should be respected when there is empirical evidence supporting the ad hoc aggregate relationship, and when the implications of that relationship could be important. In these circumstances, it would be a mistake for academic analysis to have to wait for the microfoundations work to be done.
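Wren-Lewis’s consumption example has a standard concrete instance: habit formation. Replace period utility u(c_t) with u(c_t − h c_{t−1}) and, in a common log-linearization (ignoring labor-supply terms), the Euler equation becomes

\[
\hat{c}_t = \frac{h}{1+h}\,\hat{c}_{t-1} + \frac{1}{1+h}\,E_t \hat{c}_{t+1} - \frac{1-h}{(1+h)\,\sigma}\,\left(i_t - E_t \pi_{t+1}\right),
\]

so the inertia in consumption is derived rather than bolted on. The catch is exactly the one he describes: the habit parameter h now also appears wherever marginal utility does (labor supply, asset pricing), and the whole system must be re-checked for plausibility.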

The Lucas Critique and the Representative Agent framework

Noahpinion points out that the Phillips Curve is the classic example of why aggregate relationships might not be useful without an understanding of the microfoundations. That doesn’t make aggregate-only models useless, but it should make people cautious about using them. The usual answer is that "microfoundations make models immune to the Lucas Critique." The idea is that the rules of individual behavior don’t change when policy changes, so basing our models purely on the rules of individual behavior will allow us to predict the effects of government policies. Actually, it’s not clear this really works. For example, most microfounded models rely on utility functions with constant parameters – these are the "tastes" that Bob Lucas and other founders of modern macro believed to be fundamental and unchanging. But I’d be willing to bet that different macro policies can change people’s risk aversion. If that’s the case, then using microfoundations doesn’t really answer the Lucas Critique.
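The Phillips Curve case can be made concrete in two lines. Suppose output follows a Lucas-style supply curve

\[
y_t = \gamma\,\left(\pi_t - E_{t-1}\pi_t\right) + \varepsilon_t .
\]

Under a policy rule π_t = π̄ + u_t with unpredictable u_t, the data show a stable positive relation between output and inflation with slope γ. If the authority tries to exploit that relation by raising inflation systematically, E_{t−1}π_t rises one-for-one and the estimated relation vanishes: the reduced-form slope was a function of the policy rule, not a structural constant. Noahpinion’s twist is that the same argument may apply one level down, if supposedly deep preference parameters such as risk aversion themselves respond to policy.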

The use of representative agents in macroeconomics is central to the recent soul-searching among macroeconomists and to the critiques leveled against the profession.

In a review of Michael Woodford’s major and very influential monetary theory textbook (Interest and Prices), Kevin Hoover argued that if Keynesians were stigmatized for dealing only in aggregates, the representative agent is nothing but an aggregate in microeconomic drag. He recalls that the most important – but widely neglected – results of general equilibrium theory in the 1950s and 1960s showed that the representative agent’s utility function cannot be thought of as ranking the outcomes of policy in a manner that faithfully reflects the rankings of individual agents.

AlphaSources provides some useful background on the origins of the representative agent. It first appeared in Alfred Marshall’s Principles of Economics in the form of the representative firm, which Marshall originally conjured in the context of constructing a supply curve for an industry. After a devastating critique by, among others, John Maynard Keynes and Lionel Robbins, the idea of the representative agent was put to rest in the first part of the 20th century. According to Hartley (1996), the first use of representative agents in a post-Marshall perspective came in the period when neo-classical economics was reaching its zenith. Concretely, Lucas and Rapping (1970) is cited as the first contribution using a representative agent, detailing the theory of intertemporal labor supply that is a core assumption of most real business cycle models (see D. Romer, 2006, ch. 4).

Microfoundations or "Microfoundations"

Paul Krugman argues that when making comparisons between economics and physical science, we should keep in mind that what we call “microfoundations” are not like physical laws. Heck, they’re not even true. Maximizing consumers are just a metaphor, possibly useful in making sense of behavior, but possibly not. The metaphors we use for microfoundations have no claim to be regarded as representing a higher order of truth than the ad hoc aggregate metaphors we use in IS-LM or whatever. Noahpinion argues that macroeconomists have basically done one of two things: either A) gone right on using aggregate models, while writing down some "microfoundations" to please journal editors, or B) drawn policy recommendations directly from incorrect models of individual behavior.

Kevin Grier, professor at the University of Oklahoma, points out that we don’t even have very good microfoundations for money! We just put it in the utility function or arbitrarily assume a "cash in advance" constraint. Amazingly though, central banks in the Western world have spent a lot of money and economist-hours trying to construct DSGE models that are actually useful for forecasting. This effort has largely led to the de facto abandonment of microfoundations. In the quest to make the models "work", we often choose whatever microfoundation gives the best forecast regardless of micro evidence about whether or not it is accurate, or we just add ad hoc, non-microfounded "frictions" to create more inertia. Or we add more and more "shocks" to the model and say things like, "much of the variation in X is caused by shocks to the markup".
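The two shortcuts Grier mentions each fit in one line. Money in the utility function simply asserts that real balances yield utility,

\[
\max\; E_0 \sum_{t=0}^{\infty} \beta^t\, u\!\left(c_t,\, \frac{M_t}{P_t}\right),
\]

while a cash-in-advance constraint simply asserts that consumption must be paid for with money,

\[
P_t\, c_t \le M_t
\]

(timing conventions vary across versions). Neither device explains why intrinsically worthless money is valued; both postulate it, which is the thinness Grier is pointing at.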

Robert Waldmann points out that there is a tension between the two pillars of Modern Macroeconomic Methodology: Milton Friedman’s methodology of positive economics and the Lucas critique. A nickel version of Friedman’s methodology of positive economics starts from the premise that models can be useful even if they are not true (even if they are false by definition). This is universally agreed. It implies that we shouldn’t treat models as hypotheses to be tested, so we are not necessarily interested in every testable implication of a model. Instead we should care about the implications of the model for the variables that matter to us.

Beyond the RA framework: Models with heterogeneous agents

Douglas Clement reviews for the Minneapolis Fed a flourishing literature in which researchers explore the promise of economic models that allow for human variation. Including different tastes or characteristics in models has led to a reformulation of many previous results derived with simple representative agents. While standard models tend to find a small impact of recessions, heterogeneous-agent models highlight distributive effects and thus point to much larger overall effects on unemployment and wealth. The consequences of inflation have also been revised: expected inflation has a negative impact on the poor because they hold more of their wealth in cash than the rich do, but, on the other hand, it creates large losses for older, wealthy households because they hold more bonds than others. Deflation could have the opposite consequences.

Simon Wren-Lewis points to several interesting questions raised by the role of microfoundations in macroeconomics: Can the microfoundations approach embrace all kinds of heterogeneity, or will such models lose their attractiveness in their complexity? Does sticking with simple, representative agent macro impart some kind of bias? Does a microfoundations approach discourage investigation of the more ‘difficult’ but more important issues? Might these questions suggest a link between too simple a micro-based view and a failure to understand what was going on before the financial crash? Are alternatives to microfoundations modeling methodologically coherent? Is empirical evidence ever going to be strong and clear enough to trump internal consistency?

Jonathan Heathcote, Kjetil Storesletten and Gianluca Violante have, for example, applied these heterogeneous-agent models to investigate the impact of rising wage inequality on labor supply and consumption. Thomas Piketty, Emmanuel Saez and Stefanie Stantcheva have also included heterogeneous agents in normative models of taxation, leading to a reconsideration of important results in that field.

Beyond the RA framework: The agent based approach to macroeconomics

Doyne Farmer at Santa Fe has become one of the leading proponents of the agent-based approach to macroeconomics. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions that interact through prescribed rules. Contrary to standard dynamic economic models, these models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, and they do not assume an a priori form of rationality. Behaviors are modeled according to what is observed: researchers thus need a tremendous amount of data in order to identify robust patterns. The models allow for non-equilibrium states and non-linearities: they can thus easily generate non-market-clearing phenomena and endogenous crises.
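To make the mechanics concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption on my part (the two rule types, the parameters, the market-maker price rule), not Farmer’s model; the point is only to show how a price series can be generated without ever solving for an equilibrium.

```python
import numpy as np

# Toy agent-based asset market (illustrative assumptions throughout):
# heterogeneous rule-following agents submit demands, and a market maker
# moves the price with excess demand, so nothing forces the market to
# clear at a pre-computed equilibrium.

rng = np.random.default_rng(0)

N, T = 200, 500               # number of agents, simulation length
fundamental = 100.0           # value that fundamentalists anchor on
price = np.empty(T)
price[0] = 100.0

# Each agent is a rule, not an optimizer: roughly half anchor on the
# fundamental value, the rest extrapolate the latest price change.
is_fundamentalist = rng.random(N) < 0.5
aggressiveness = rng.uniform(0.5, 1.5, N)   # heterogeneous reaction strengths

for t in range(1, T):
    trend = price[t - 1] - price[t - 2] if t >= 2 else 0.0
    demand_f = aggressiveness * (fundamental - price[t - 1])  # buy low, sell high
    demand_c = aggressiveness * trend                         # chase the trend
    demand = np.where(is_fundamentalist, demand_f, demand_c)
    excess = demand.sum() + rng.normal(0.0, 50.0)             # noise traders
    # Price-impact rule: the price moves with excess demand, allowing
    # persistent non-clearing and endogenous booms and busts.
    price[t] = price[t - 1] + 0.001 * excess

print(f"mean price {price.mean():.2f}, std {price.std():.2f}")
```

Because the price responds to excess demand rather than being set to clear the market, runs of trend-chasing can carry it away from the fundamental value and back, generating fluctuations with no external shock to tastes or technology.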

In a 2009 article in Nature, Farmer and Foley claimed that these models could include financial interactions in a much more complex and realistic way than usual models. Farmer and his team are now developing an agent-based model of the housing market to mimic the current financial crisis. The team collects data on actual people to calibrate a rich model with millions of interacting agents. This is what they call a bottom-up approach to macroeconomics: see here for a more detailed presentation of that approach by INET.

In order to convince most economists, agent-based models will need to show that the mechanisms at work in the complex interactions remain clear and intuitive, and that the result is anything but a new black box. Richard Serlin points out, in particular, that aggregation is a huge challenge for microfounded models, since complex systems often have chaotic properties.

Beyond the RA framework: IKE and Behavioral Macro

Kevin Hoover has recently written what he calls "an Econ 1 (Principles) version" of the Imperfect Knowledge Economics (IKE) developed by Roman Frydman and Michael D. Goldberg, which aims to provide an alternative to the representative-agent framework. IKE sees investors as adopting various strategies for forming expectations of future prices. These strategies are not unique, so there is a distribution of strategies, and investors may alter their strategies from time to time. Frydman and Goldberg have notably applied their new approach to asset pricing and financial markets. They complain that current models amount to an “economics of magical thinking”. Behavioral economics has shown that market participants do not act the way conventional economists would predict “rational individuals” to act. But according to Frydman and Goldberg, it would also be wrong to interpret these empirical findings to mean that many market participants are irrational, prone to emotion, or ignore economic fundamentals for other reasons. People can be rational in different ways depending on the context and the information available to them.

Roger Guesnerie has a rather similar view: developing new approaches to rationality and expectations is the promising way for economics to follow in order to build macroeconomic models in the post-crisis era.

Paul De Grauwe recently wrote a textbook on behavioral macroeconomics. Contrary to mainstream top-down models, in which agents are capable of understanding the whole picture and use this superior information to determine their optimal plans, the models used in this book are bottom-up models in which all agents experience cognitive limitations. As a result, these agents are only capable of understanding and using small bits of information, and they rely on simple rules of behavior. These models are not devoid of rationality: agents behave rationally in that they are willing to learn from their mistakes. Importantly, these models produce radically different macroeconomic dynamics from RA models.
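A stylized version of the switching mechanism behind such models can be sketched in a few lines of Python. The two forecasting rules, the discrete-choice switching, and all parameters below are illustrative assumptions, not De Grauwe’s exact equations; the sketch only shows how sentiment can become a state variable when agents migrate toward whichever simple rule has recently forecast better.

```python
import numpy as np

# Stylized heuristic-switching sketch (illustrative, not De Grauwe's
# exact equations): agents forecast the output gap with one of two
# simple biased rules and migrate toward whichever rule has recently
# forecast better.

rng = np.random.default_rng(1)
T = 300
y = np.zeros(T)                  # output gap
share_opt = np.full(T, 0.5)      # fraction of agents using the optimistic rule
perf_opt = perf_pes = 0.0        # discounted (negative) squared forecast errors
rho, gamma, mem = 0.8, 2.0, 0.9  # persistence, choice intensity, memory

f_opt, f_pes = 1.0, -1.0         # biased "optimist" and "pessimist" forecasts

for t in range(1, T):
    # Market forecast is the share-weighted average of the two rules.
    f_market = share_opt[t - 1] * f_opt + (1 - share_opt[t - 1]) * f_pes
    y[t] = rho * y[t - 1] + 0.2 * f_market + rng.normal(0.0, 0.5)
    # Update each rule's discounted forecast performance.
    perf_opt = mem * perf_opt - (1 - mem) * (y[t] - f_opt) ** 2
    perf_pes = mem * perf_pes - (1 - mem) * (y[t] - f_pes) ** 2
    # Discrete-choice switching: better recent performance attracts agents.
    e_opt, e_pes = np.exp(gamma * perf_opt), np.exp(gamma * perf_pes)
    share_opt[t] = e_opt / (e_opt + e_pes)

print(f"output-gap std {y.std():.2f}; mean optimistic share {share_opt.mean():.2f}")
```

The share of optimists feeds back into output, and output feeds back into the share, which is how waves of optimism and pessimism can arise endogenously in this class of models.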



I meant to note this post from Rajiv Sethi on the microfoundations of macroeconomic models when it was first posted a few weeks ago -- a post that quotes Duncan Foley describing the rational expectations assumption put forward by Lucas and Sargent as "a boring and predictable retracing of an already discredited path" -- but I'm only just getting to it. So let me take advantage of the fact that Rajiv has said it's okay to reprint his posts and highlight it now:

Foley, Sidrauski, and the Microfoundations Project, by Rajiv Sethi: In a previous post I mentioned an autobiographical essay by Duncan Foley in which he describes in vivid detail his attempts to "alter and generalize competitive equilibrium microeconomic theory" so as to make its predictions more consonant with macroeconomic reality. Much of this work was done in collaboration with Miguel Sidrauski while the two were members of the MIT faculty some forty years ago. Both men were troubled by the "classical scientific dilemma" facing economics at the time: the discipline had "two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible." This led them to embark on a "search for a synthesis" that would bridge the gap.

This is how Duncan describes the basic theoretical problem they faced, the strategies they adopted in trying to solve it, the importance of the distinction between stock and flow equilibrium, and the desirability of a theory that allows for intertemporal plans to be mutually inconsistent in the aggregate (links added):

My intellectual preoccupation at M.I.T. was what has come to be called the "microeconomic foundations of macroeconomics." The general equilibrium theory forged by Walras and elaborated by Wald (1951), McKenzie (1959), and Arrow and Debreu (1954) can be used, with the assumption that markets exist for all commodities at all future moments and in all contingencies, to represent macroeconomic reality by simple aggregation. The resulting picture of macroeconomic reality, however, has several disturbing features. For one thing, competitive general equilibrium is efficient, so that it is incompatible with the unemployment of any resources productive enough to pay their costs of utilization. This is difficult to reconcile with the common observation of widely fluctuating rates of unemployment of labor and of capacity utilization of plant and equipment. General equilibrium theory reduces economic production and exchange to the pursuit of directly consumable goods and services, and as a result has no real role for money... The general equilibrium theory can accommodate fluctuations in output and consumption, but only as responses to external shocks to resource availability, technology or tastes. It is difficult to reconcile these relatively slowly moving factors with the large business-cycle fluctuations characteristic of developed capitalist economies. In assuming the clearing of markets for all contingencies in all periods, general equilibrium theory assures the consistency... of individual consumption, investment, and production plans, which is difficult to reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life...
Keynes' theory, on the other hand, offers a systematic way around these problems. Keynes views money as central to the actual operation of developed capitalist economies, precisely because markets for all periods and contingencies do not exist to reconcile differences in agents' opinions about the future. Because agents cannot sell all their prospects on contingent claims markets, they are liquidity constrained. In a liquidity constrained economy there is no guarantee that all factor markets will clear without unemployed labor or unutilized productive capacity. Market prices are inevitably established in part by speculation on an uncertain future. As a result the economy is vulnerable to endogenous fluctuations as the result of herd psychology and self-fulfilling prophecy. From this point of view it is not hard to see why business cycle fluctuations are a characteristic of a productively and financially developed capitalist economy, nor why the potential for financial crisis is inherent in decentralized market allocation of investment...

But there are many loose ends in Keynes' argument. In presenting the equilibrium of short-term expectations that determines the level of output, income and employment in the short period, for example, Keynes argues that entrepreneurs hire labor and buy raw materials to undertake production because they form an expectation as to the volume of sales they will achieve when the production process runs its course... But Keynes offers no systematic alternative account of how entrepreneurs form a view of their prospects on the market to take the place of the assumption of perfect competition and market clearing. This turns out, in detail, to be a very difficult problem to solve.

Given the supply of nominal money, a fall in prices appears to be a possible endogenous source of increased liquidity. Keynes argues that the money price level is largely determined by the money wage level, but offers no systematic explanation of the dynamics governing the movements of money wages.

Though money is the fulcrum on which his theory turns, Keynes does not actually set out a theory of the economic origin or determinants of money. As a result it is difficult to relate the fluctuations in macroeconomic variables such as the velocity of money to the underlying process of the circulation of commodities.

On point after point Keynes' plausible macroeconomic concepts raise unanswered questions about the microeconomic behavior that might support them.

Thus economics in the late 1960s suffered from a classical scientific dilemma in that it had two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible. The search for a synthesis which would bridge this gap seemed to me to be a good problem to work on. From the beginning the goal of my work in this area was to alter and generalize competitive equilibrium microeconomic theory so as to deduce Keynesian macroeconomic behavior from it.

In the succeeding years I approached this project from two angles. One was to fiddle with general equilibrium theory in the hope of introducing money into it in a convincing and unified way. The other was to rewrite as much as possible of Keynesian macroeconomics in a form compatible with competitive general equilibrium.
This latter project came to fruition first as a close collaboration with Miguel Sidrauski, and resulted in a book Monetary and Fiscal Policy in a Growing Economy (Foley and Sidrauski, 1971)... Our joint work... sought to develop a canonical model with which it would be possible to analyze the classical problems of the impact of government policy on the path of output of an economy... Following my notion that the price of capital goods are determined in asset markets, and the flow of new investment adjusts to make the marginal cost of investment equal to that price, we assumed a two-sector production system, so that there would be a rising marginal cost of investment. The asset equilibrium of the model is a generalization of Sidrauski's (and Tobin's) portfolio demand theory, which in turn is a generalization of Keynes' theory of liquidity preference. One of my chief goals was to sort out rigorously and explicitly the relation between stock and flow variables, so that we analyzed the model as a system of differential equations in continuous time, a setting in which the difference between stock and flow concepts is highlighted. At each instant asset market clearing of money, bonds, and capital markets in stocks together with labor and consumption good flow market clearing determine the price of capital, the interest rate, the price level, income, consumption and investment. Government policies determining the evolution of supplies of money and bonds together with the addition of investment flows to the capital stock move the model through time in a transparent trajectory. The book considers the comparative statics and dynamics of this model in detail...

Monetary and Fiscal Policy in a Growing Economy had a mixed reception... The fact that we did not derive the asset and consumption demands of households from explicit intertemporal expected utility maximization turned out to be an unfashionable choice for the 1970s, when the economics profession was persuaded to put an immense premium on models of "full rationality." Sidrauski and I were quite aware of the possibility of such a model, which would have been a generalization of his thesis work. At a conference at the University of Chicago in 1968, David Nissen presented a perfect foresight macroeconomic model that made clear that this path would lead directly back to the Walrasian general equilibrium results. Since I didn't believe in the relevance of that path to the understanding of real macroeconomic phenomena, I thought the main point in exploring this line of reasoning was to show how unrealistic its results were...

The project of a macroeconomic theory distinct from Walrasian general equilibrium theory rests heavily on the distinction between stock and flow equilibrium. In Keynes' vision, asset holders are forced to value existing and prospective assets speculatively without a full knowledge of the future. Our model represented this moment through the clearance of asset markets. In the Walrasian vision this distinction is dissolved through the imaginary device of clearing futures and contingency markets which establish flow prices that imply asset prices. The moral of Sidrauski's and my work is that some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics. Once the implications of the stock-flow distinction in macroeconomics became clear, however, the temptation to finesse them by retreating to the Walrasian paradigm under the slogan of "rational expectations" became overwhelming to the American economics profession....

In my view, the rational expectations assumption which Lucas and Sargent put forward to "close" the Keynesian model, was only a disguised form of the assumption of the existence of complete futures and contingencies markets. When one unpacked the "expectations" language of the rational expectations literature, it turned out that these models assumed that agents formed expectations of futures and contingency prices that were consistent with the aggregate plans being made, and hence were in fact competitive general equilibrium prices in a model of complete futures and contingency markets. Arrow and Debreu had made the assumption of the existence of complete futures and contingency markets to give their version of the Walrasian model the appearance of coping with the real-world problems posed by the uncertainty of the future. To my mind, the rational expectations approach amounted to making the perfect-foresight assumptions that I had already considered and rejected on grounds of unrealism in the course of working with Sidrauski... What the profession took to be an exciting breakthrough in economic theory I saw as a boring and predictable retracing of an already discredited path.

To my mind the most appealing feature of the Foley-Sidrauski approach to microfoundations is that it allows for the possibility that individuals make mutually inconsistent plans based on heterogeneous beliefs about the future. This is what the rational expectations hypothesis rules out. Auxiliary assumptions such as sticky prices must then be imposed in order to make the models more consonant with empirical observation.

In contrast, the notion of temporary equilibrium (introduced by John Hicks) allows for the clearing of asset markets despite mutually inconsistent intertemporal plans. As time elapses and these inconsistencies are revealed, dynamic adjustments are made that affect prices and production. There is no presumption that such a process must converge to anything resembling a rational expectations equilibrium, although there are circumstances under which it might. The contemporary literature closest to this vision of the economy is based on the dynamics of learning, and this dates back at least to Marcet and Sargent (1989) and Howitt (1992), with more recent contributions by Evans and Honkapohja (2001) and Eusepi and Preston (2008). I am not by any means an insider to this literature but my instincts tell me that it is a promising direction in which to proceed.
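As a flavor of what this learning literature studies, here is a minimal sketch (all parameters illustrative) of adaptive learning in a cobweb-style model: the realized price depends on agents’ forecast of it, and agents estimate the mean price by a decreasing-gain recursion rather than being endowed with rational expectations. Convergence to the rational expectations equilibrium then depends on the model’s feedback parameter, so it is a result to be established, not an assumption.

```python
import numpy as np

# Adaptive-learning sketch in a cobweb-style model (illustrative
# parameters): the realized price depends on the forecast,
#     p_t = a + b * E[p_t] + eps_t,
# and agents estimate the mean price by a decreasing-gain recursion
# instead of holding rational expectations from the start.

rng = np.random.default_rng(2)
a, T = 2.0, 2000

for b in (0.5, 1.5):          # stable vs. unstable expectational feedback
    belief = 0.0              # agents' running estimate of the mean price
    for t in range(1, T + 1):
        p = a + b * belief + rng.normal(0.0, 0.1)
        belief += (p - belief) / t      # recursive least-squares update
    ree = a / (1 - b)                   # rational expectations equilibrium mean
    print(f"b={b}: learned mean {belief:8.2f}, REE mean {ree:6.2f}")
```

With b = 0.5 the learned belief settles at the REE mean; with b = 1.5 it drifts ever further away, illustrating why stability under learning has to be checked case by case.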

Update (11/20). Nick Rowe (in a comment) directs us to an earlier post of his in which the importance of allowing for mutually inconsistent intertemporal plans is discussed. He too argues for an explicit analysis of the dynamic adjustment process that resolves these inconsistencies as they appear through time. It's a good post, and makes the point with clarity.

Some of the comments on Nick's post reflect the view that explicit consideration of disequilibrium dynamics is unnecessary since they are known to converge to rational expectations in some models. My own view is that a lot more work needs to be done on learning before this sanguine claim can be said to have theoretical support. Furthermore, local stability of a rational expectations equilibrium in a linearized system does not tell us very much about the global properties of the original (nonlinear) system, since it leaves open the possibility of corridor stability: instability in the face of large but not small perturbations. (Tobin made a similar point in a paper that I have discussed previously here.)

I also think that allowing for the possibility of mutually inconsistent plans based on heterogeneous beliefs about the future is important, but in general allowing for heterogeneity in multiple agent general equilibrium models is a tough problem to solve. The difficulty is that most of the results we need at the aggregate level no longer hold once we drop the idea of a single, representative agent and replace it with multiple, heterogeneous agents. Alan Kirman has a nice discussion of this point:

the fundamental problem is that the conditions which are known to guarantee ... stability ... cannot be obtained from assumptions on the behavior of the individuals. To be absolutely clear, what Sonnenschein (1972), Mantel (1974), and Debreu (1974) showed is that there is no hope of a general result for stability nor indeed of uniqueness of equilibria, if we wish to build a model based only on individuals who satisfy the standard assumptions on rationality. The full force of the Sonnenschein, Mantel, and Debreu (SMD) result is often not appreciated. Without stability or uniqueness, the intrinsic interest of economic analysis based on the general equilibrium model is extremely limited. ...

Now it is clear why macroeconomists find as the usual way out of this problem the assumption of a representative agent and this obviously generates a unique equilibrium. However, the assumption of such an individual is open to familiar criticisms...

That is, you need to make the representative agent assumption in order to aggregate individuals up to the macroeconomic level and still be able to guarantee uniqueness, stability, or many other properties we need to have a reasonable model. There are some ways to cleverly introduce heterogeneity into standard DSGE models, e.g. through differences in information sets, and still preserve uniqueness and stability after aggregation, but the general problem remains.

Unfortunately, in order to understand something like a financial crisis, heterogeneity is a critical component of the models. For example, to generate the sale of a financial asset by an agent who expects future losses to an agent who expects future gains, differences in expectations about the future are needed. If both agents expect profits or losses identically, then no trade will occur. Thus, with identical agents it's hard to incorporate a financial sector into these models, and the lack of a realistic financial sector in workhorse DSGE models is one of the reasons why macroeconomic models failed to inform us about the crisis.
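The no-trade logic is simple enough to check mechanically. In the toy sketch below (hypothetical numbers, and a deliberate caricature: each agent’s reservation price is just their subjective expected payoff), dispersed beliefs create buyer-seller pairs with room for a mutually agreeable price, while identical beliefs create none.

```python
import numpy as np

# Toy check of the no-trade point (hypothetical numbers): treat each
# agent's reservation price as their subjective expected payoff, so a
# buyer-seller pair can strike a deal only if the buyer values the
# asset strictly more than the seller.

rng = np.random.default_rng(3)

def pairs_with_gains_from_trade(expected_payoffs: np.ndarray) -> int:
    """Count ordered (buyer, seller) pairs whose beliefs leave room for a deal."""
    v = expected_payoffs
    return int(sum((v[i] > v[j]) for i in range(len(v)) for j in range(len(v))))

heterogeneous = rng.normal(100.0, 5.0, size=50)  # dispersed beliefs
identical = np.full(50, 100.0)                   # common beliefs

print("tradable pairs, heterogeneous beliefs:", pairs_with_gains_from_trade(heterogeneous))
print("tradable pairs, identical beliefs:", pairs_with_gains_from_trade(identical))
```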

However, the technical problems associated with introducing the needed heterogeneity in a satisfactory manner have not yet been resolved, and it's not at all clear that they can be resolved within the existing DSGE theoretical framework. That means we will either have to (1) find a way around this problem so as to maintain and preserve the current theoretical tools and techniques, (2) find a new theoretical structure that does not suffer from this problem, e.g. along the lines suggested by Foley, or (3) give up the idea of providing microeconomic foundations for macroeconomic models and begin modeling the aggregate level directly (e.g. see Kirman's discussion of network models).
