A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.
Macroeconomic models may be logical, mathematical, and/or computational; the different types of macroeconomic models serve different purposes and have different advantages and disadvantages.[1] Macroeconomic models may be used to clarify and illustrate basic theoretical principles; they may be used to test, compare, and quantify different macroeconomic theories; they may be used to produce "what if" scenarios (usually to predict the effects of changes in monetary, fiscal, or other macroeconomic policies); and they may be used to generate economic forecasts. Thus, macroeconomic models are widely used in academia in teaching and research, and are also widely used by international organizations, national governments and larger corporations, as well as by economic consultants and think tanks.
Simple textbook descriptions of the macroeconomy involving a small number of equations or diagrams are often called ‘models’. Examples include the IS-LM model and Mundell–Fleming model of Keynesian macroeconomics, and the Solow model of neoclassical growth theory. These models share several features. They are based on a few equations involving a few variables, which can often be explained with simple diagrams.[2] Many of these models are static, but some are dynamic, describing the economy over many time periods. The variables that appear in these models often represent macroeconomic aggregates (such as GDP or total employment) rather than individual choice variables, and while the equations relating these variables are intended to describe economic decisions, they are not usually derived directly by aggregating models of individual choices. They are simple enough to be used as illustrations of theoretical points in introductory explanations of macroeconomic ideas; but for the same reason, quantitative application to forecasting, testing, or policy evaluation is usually impossible without substantially augmenting the structure of the model.
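The Solow model illustrates how little machinery such textbook models need. The following is a minimal simulation sketch, assuming Cobb–Douglas production in per-capita terms with no population or technology growth; the parameter values are illustrative assumptions, not estimates.

```python
# Minimal sketch of Solow-style capital accumulation: k' = k + s*k**a - d*k.
# Parameter values are illustrative assumptions, not calibrated estimates.
ALPHA = 0.3   # capital share in Cobb-Douglas production y = k**ALPHA
S = 0.2       # saving rate (fraction of output invested)
DELTA = 0.05  # depreciation rate of capital

def simulate_solow(k0=1.0, periods=200):
    """Iterate the capital-accumulation equation from an initial capital stock."""
    k = k0
    for _ in range(periods):
        k = k + S * k**ALPHA - DELTA * k
    return k

k_final = simulate_solow()
k_star = (S / DELTA) ** (1 / (1 - ALPHA))  # closed-form steady state
print(f"capital after 200 periods: {k_final:.3f}; steady state: {k_star:.3f}")
```

A single difference equation in one variable reproduces the model's central prediction: from any positive starting point, capital converges to the steady state k* = (s/δ)^(1/(1−α)).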
In the 1940s and 1950s, as governments began accumulating national income and product accounting data, economists set out to construct quantitative models to describe the dynamics observed in the data.[3] These models estimated the relations between different macroeconomic variables using (mostly linear) time series analysis. Like the simpler theoretical models, these empirical models described relations between aggregate quantities, but many addressed a much finer level of detail (for example, studying the relations between output, employment, investment, and other variables in many different industries). Thus, these models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory (for example, including past income as a determinant of consumption, as suggested by the theory of adaptive expectations), variable inclusion was determined mostly on empirical grounds.[4]
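The building block of such systems was the estimated behavioral equation. As a sketch of the technique (the data below are synthetic and the specification is our own illustration, not any particular historical model), one such equation might regress consumption on current and lagged income, the lag reflecting the kind of dependence on past income that adaptive expectations suggests:

```python
import numpy as np

# One equation of the kind early empirical models stacked by the hundreds:
# consumption regressed on current and lagged income. Synthetic data only.
rng = np.random.default_rng(0)
T = 120
income = 100 + np.cumsum(rng.normal(0.5, 1.0, T))   # trending income series
consumption = 5 + 0.6 * income + rng.normal(0.0, 1.0, T)
consumption[1:] += 0.25 * income[:-1]               # dependence on past income

# Regressors [1, y_t, y_{t-1}]; ordinary least squares via lstsq.
X = np.column_stack([np.ones(T - 1), income[1:], income[:-1]])
coef, *_ = np.linalg.lstsq(X, consumption[1:], rcond=None)
print("estimated intercept and income coefficients:", coef)
```

A full model of the era chained hundreds of equations estimated this way and solved them jointly, which is why computers became essential.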
Dutch economist Jan Tinbergen developed the first comprehensive national model, which he built for the Netherlands in 1936. He later applied the same modeling structure to the economies of the United States and the United Kingdom.[3] The first global macroeconomic model, Wharton Econometric Forecasting Associates' LINK project, was initiated by Lawrence Klein. The model was cited in 1980 when Klein, like Tinbergen before him, won the Nobel Prize. Large-scale empirical models of this type, including the Wharton model, are still in use today, especially for forecasting purposes.[5][6][7]
Econometric studies in the first part of the 20th century showed a negative correlation between inflation and unemployment called the Phillips curve.[8] Empirical macroeconomic forecasting models, being based on roughly the same data, had similar implications: they suggested that unemployment could be permanently lowered by permanently increasing inflation. However, in 1968, Milton Friedman[9] and Edmund Phelps[10] argued that this apparent tradeoff was illusory. They claimed that the historical relation between inflation and unemployment was due to the fact that past inflationary episodes had been largely unexpected. They argued that if monetary authorities permanently raised the inflation rate, workers and firms would eventually come to understand this, at which point the economy would return to its previous, higher level of unemployment, but now with higher inflation too. The stagflation of the 1970s appeared to bear out their prediction.[11]
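The Friedman–Phelps argument is commonly summarized by the expectations-augmented Phillips curve. In a standard textbook formulation (the notation is ours, not drawn from the cited papers):

```latex
\pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}), \qquad \beta > 0
```

Here \pi_t is inflation, \pi_t^{e} is expected inflation, u_t is unemployment, and u^{*} is the natural rate. Unemployment can be held below u^{*} only while inflation exceeds expectations; once expectations catch up with a permanently higher inflation rate, unemployment returns to u^{*} and only the higher inflation remains.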
In 1976, Robert Lucas Jr. published an influential paper arguing that the failure of the Phillips curve in the 1970s was just one example of a general problem with empirical forecasting models.[12][13] He pointed out that such models are derived from observed relationships between various macroeconomic quantities over time, and that these relations differ depending on what macroeconomic policy regime is in place. In the context of the Phillips curve, this means that the relation between inflation and unemployment observed in an economy where inflation has usually been low in the past would differ from the relation observed in an economy where inflation has been high.[14] Furthermore, this means one cannot predict the effects of a new policy regime using an empirical forecasting model based on data from previous periods when that policy regime was not in place. Lucas argued that economists would remain unable to predict the effects of new policies unless they built models based on economic fundamentals (like preferences, technology, and budget constraints) that should be unaffected by policy changes.
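A small simulation makes the critique concrete. Under a hypothetical data-generating process of our own construction (not taken from Lucas's paper), unemployment responds only to unexpected inflation; an econometrician regressing unemployment on inflation then finds a strong 'tradeoff' in a regime where inflation surprises expectations, and essentially none once the new regime is fully anticipated:

```python
import numpy as np

rng = np.random.default_rng(1)
U_STAR, GAMMA = 6.0, 1.5   # natural rate and response to surprises (assumed)

def simulate(pi_mean, anticipated, T=500):
    """Hypothetical economy: unemployment reacts only to *unexpected* inflation."""
    pi = rng.normal(pi_mean, 1.0, T)
    pi_e = pi if anticipated else np.zeros(T)   # expectations: correct vs. anchored
    u = U_STAR - GAMMA * (pi - pi_e) + rng.normal(0.0, 0.3, T)
    return pi, u

def fitted_slope(pi, u):
    """OLS slope of unemployment on inflation: the 'empirical Phillips curve'."""
    X = np.column_stack([np.ones_like(pi), pi])
    return np.linalg.lstsq(X, u, rcond=None)[0][1]

pi1, u1 = simulate(pi_mean=2.0, anticipated=False)  # old regime: surprises
pi2, u2 = simulate(pi_mean=8.0, anticipated=True)   # new regime: anticipated
print("slope under old regime:", fitted_slope(pi1, u1))  # close to -GAMMA
print("slope under new regime:", fitted_slope(pi2, u2))  # close to zero
```

The estimated relation is not a structural feature of the economy; it changes with the policy regime, which is exactly why forecasts built on it break down when the regime does.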
Partly as a response to the Lucas critique, economists of the 1980s and 1990s began to construct microfounded[15] macroeconomic models based on rational choice, which have come to be called dynamic stochastic general equilibrium (DSGE) models. These models begin by specifying the set of agents active in the economy, such as households, firms, and governments in one or more countries, as well as the preferences, technology, and budget constraint of each one. Each agent is assumed to make an optimal choice, taking into account prices and the strategies of other agents, both in the current period and in the future. Summing up the decisions of the different types of agents, it is possible to find the prices that equate supply with demand in every market. Thus these models embody a type of equilibrium self-consistency: agents choose optimally given the prices, while prices must be consistent with agents’ supplies and demands.
DSGE models often assume that all agents of a given type are identical (i.e. there is a ‘representative household’ and a ‘representative firm’) and can perform perfect calculations that forecast the future correctly on average (which is called rational expectations). However, these are only simplifying assumptions, and are not essential for the DSGE methodology; many DSGE studies aim for greater realism by considering heterogeneous agents[16] or various types of adaptive expectations.[17] Compared with empirical forecasting models, DSGE models typically have fewer variables and equations, mainly because DSGE models are harder to solve, even with the help of computers.[18] Simple theoretical DSGE models, involving only a few variables, have been used to analyze the forces that drive business cycles; this line of work has given rise to two main competing frameworks called the real business cycle model[19][20][21] and the New Keynesian DSGE model.[22][23] More elaborate DSGE models are used to predict the effects of changes in economic policy and evaluate their impact on social welfare. However, economic forecasting is still largely based on more traditional empirical models, which are still widely believed to achieve greater accuracy in predicting the impact of economic disturbances over time.
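For a sense of what the simplest such models look like when solved, consider the special case of the stochastic growth model with log utility, Cobb–Douglas production, and full depreciation (the Brock–Mirman case), where the representative household's optimal saving rule is known in closed form, k' = αβzk^α. The sketch below simulates it; all parameter values are illustrative.

```python
import numpy as np

# Brock-Mirman stochastic growth model: log utility, y = z * k**ALPHA,
# full depreciation. The optimal policy is known in closed form:
#   k' = ALPHA * BETA * z * k**ALPHA
ALPHA, BETA = 0.36, 0.96     # capital share, discount factor (illustrative)
RHO, SIGMA = 0.9, 0.05       # AR(1) persistence and volatility of log z

rng = np.random.default_rng(42)
T = 200
log_z = np.zeros(T)
for t in range(1, T):        # technology follows an AR(1) in logs
    log_z[t] = RHO * log_z[t - 1] + SIGMA * rng.normal()

k = np.empty(T)
k[0] = 0.1
for t in range(T - 1):       # apply the optimal saving rule each period
    k[t + 1] = ALPHA * BETA * np.exp(log_z[t]) * k[t] ** ALPHA

y = np.exp(log_z) * k ** ALPHA
print(f"std of log output: {np.log(y).std():.4f}")  # model-generated fluctuations
```

Realistic DSGE models drop the special assumptions that deliver this closed form, which is why they must be solved numerically and why their size stays modest compared with the older empirical systems.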
A methodology that pre-dates DSGE modeling is computable general equilibrium (CGE) modeling. Like DSGE models, CGE models are often microfounded on assumptions about preferences, technology, and budget constraints. However, CGE models focus mostly on long-run relationships, making them most suited to studying the long-run impact of permanent policies like the tax system or the openness of the economy to international trade.[24][25] DSGE models instead emphasize the dynamics of the economy over time (often at a quarterly frequency), making them suited for studying business cycles and the cyclical effects of monetary and fiscal policy.
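A toy version of the comparative-statics exercise a CGE model performs, under assumed functional forms of our own choosing: one good, consumer demand A/p, producer supply B·p, and an ad valorem tax t driving a wedge between the consumer price (1 + t)p and the producer price p. 'Solving the model' means finding the market-clearing price before and after a permanent tax change:

```python
# Toy computable-general-equilibrium comparative statics (assumed forms):
# demand A / p_consumer, supply B * p_producer, p_consumer = (1 + t) * p_producer.
A, B = 10.0, 2.0   # illustrative demand and supply parameters

def excess_demand(p_prod, tax):
    return A / ((1 + tax) * p_prod) - B * p_prod

def solve_price(tax, lo=1e-6, hi=100.0):
    """Find the market-clearing producer price by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if excess_demand(mid, tax) > 0:
            lo = mid   # demand exceeds supply: the price must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

for tax in (0.0, 0.2):   # compare long-run equilibria with and without the tax
    p = solve_price(tax)
    print(f"tax {tax:.0%}: producer price {p:.3f}, quantity traded {B * p:.3f}")
```

Real CGE models do this with many goods, sectors, and households at once, but the logic is the same: recompute the equilibrium under the new permanent policy and compare.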
Another modeling methodology is agent-based computational economics (ACE), a variety of agent-based modeling.[26] Like the DSGE methodology, ACE seeks to break down aggregate macroeconomic relationships into the microeconomic decisions of individual agents. ACE models also begin by defining the set of agents that make up the economy, and specify the types of interactions individual agents can have with each other or with the market as a whole. Instead of defining the preferences of those agents, ACE models often jump directly to specifying their strategies. Alternatively, preferences may be specified together with an initial strategy and a learning rule whereby the strategy is adjusted according to its past success.[27] Given these strategies, the interaction of large numbers of individual agents (who may be very heterogeneous) can be simulated on a computer, and the aggregate, macroeconomic relationships that arise from those individual actions can then be studied.
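A minimal agent-based sketch follows, under assumptions chosen purely for illustration: each agent's strategy is a price forecast, agents supply more when they expect a higher price, the market price emerges from their combined supply against a fixed demand curve, and a simple adaptive learning rule moves each forecast toward the realized price.

```python
import numpy as np

# Minimal agent-based sketch: heterogeneous agents, each holding a price
# forecast (its "strategy"), with a learning rule that adjusts the forecast
# toward the price its past actions helped produce.
rng = np.random.default_rng(7)
N, T = 500, 60
DEMAND_A, DEMAND_B = 20.0, 1.0   # inverse demand: p = A - B * mean supply
LEARN = 0.3                      # adaptive learning speed

forecasts = rng.uniform(1.0, 15.0, N)   # heterogeneous initial strategies
for t in range(T):
    supply = 0.5 * forecasts                      # supply rises with the forecast
    price = DEMAND_A - DEMAND_B * supply.mean()   # price from aggregate supply
    forecasts += LEARN * (price - forecasts)      # learning: adjust toward price

print(f"final price {price:.3f}, forecast dispersion {forecasts.std():.4f}")
```

The macroeconomic outcome, here the price path, is not assumed to solve an aggregate optimization problem; it emerges from the simulated interaction of many individually simple agents.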
DSGE and ACE models have different advantages and disadvantages due to their different underlying structures. DSGE models may exaggerate individual rationality and foresight, and understate the importance of heterogeneity, since the rational-expectations, representative-agent case remains the simplest and thus the most common type of DSGE model to solve. Also, unlike in ACE models, local interactions between individual agents may be difficult to study in DSGE models, which focus mostly on the way agents interact through aggregate prices. On the other hand, ACE models may exaggerate errors in individual decision-making, since the strategies assumed in ACE models may be very far from optimal choices unless the modeler is very careful. A related issue is that ACE models which start from strategies instead of preferences may remain vulnerable to the Lucas critique: a changed policy regime should generally give rise to changed strategies.