
International Journal of Forecasting 6 (1990) 301-309

North-Holland


The Warwick ESRC Macroeconomic Modelling Bureau: An assessment

Ron Smith * Department of Economics, Birkbeck College, London W1P 1PA, United Kingdom

Abstract: In the UK an independent institution, the Macroeconomic Modelling Bureau, is responsible for evaluating and monitoring the large macro-econometric models used for forecasting and policy analysis. This paper assesses the Bureau’s record in model comparison, improving access to the models, methodological innovation and improving modelling standards.

Keywords: Macroeconometric modelling.

1. Introduction

The UK is somewhat unusual in that a public body, the Economic and Social Research Council (ESRC, called the Social Science Research Council, SSRC, prior to 1984) has been a major source of funds for the construction and development of the large macro-econometric models used for forecasting and policy analysis. The UK is even more unusual in having an independent organisation to monitor and evaluate these models. In 1983, after a competition, the SSRC established a Macroeconomic Modelling Bureau at Warwick University headed by K.F. Wallis. The Bureau was commissioned to make the models accessible and to conduct comparative and methodological research on them. It has now been in operation for six years, so there is sufficient evidence to attempt an assessment. For clarity, and to distinguish the Bureau’s publications from the other references, Bureau publications are cited by numbers in square brackets. More general discussions of the development of modelling in the UK can be found in [15], Ball and Holly (1989) and Kenway (1989).

* I am very grateful for the large number of helpful comments I received on an earlier draft of this paper, from the Bureau, model proprietors and others. Errors and opinions remain mine.

2. Background

As an introduction, it is worth considering the history of the problems to which the Bureau was intended to provide a solution. Its establishment was partly a response to the wave of criticism that hit large models in the late 1970s, though the form of the response in the UK reflected competition for resources. Under the old SSRC institutional arrangements, model groups such as the London Business School (LBS), National Institute (NIESR) and Cambridge Growth Project (CGP) had competed with standard academic economic research for funds. No distinction was made between this and other types of research, despite the fact that the model teams absorbed a large proportion of the funds available for research. This created concern both about how to compare the large models with other forms of research and how to compare the models with each other, since each team came up for funding at a different time. A group, under Michael Posner, then Chairman of the SSRC, recommended that funding of the models should be provided for separately from other research through a Macroeconomic Consortium. This would organise rounds of competitions between the teams (SSRC, 1981). Criticisms of modelling in the UK also led the group to recommend




the establishment of a Bureau, an independent centre for evaluation and comparison. There were three types of criticism.

The Fundamentalist critique was that the models were not a worthwhile method of research: “little in the way of scientific knowledge is to be gained from the construction of large-scale models over what can be learned by other means. At present at least, there are very few spin-offs into academic advance” (Deaton, 1981). This view, which seemed to be the consensus of academic economists, did not deny that these models might be useful to Government or Business, but to the extent they were useful, they should be funded directly by Government or Business, not out of scarce academic research budgets.

The Reformist critique accepted that, in principle, the activity was worthwhile, but held that it was being done badly in the UK. In particular, the teams failed to provide information about their models (as distinct from providing forecasts and policy analysis) and failed to evaluate them properly. For instance, much of the systems evaluation that was common for US models and described by Klein in Ormerod (1979) had not been applied to UK models. On the basis of a survey conducted in 1976, Jeremy Bray concluded: “The testing of UK models falls short of the practices recommended by Klein” (Ormerod, 1979, p. 341). And Budd (Ormerod, 1979, p. 15) commented that while some regarded such practices “as an ideal to which all should strive”, others regarded them “as a minimum to which all should conform”. The deficiencies in the information and evaluation provided by the teams for ad hoc exercises (e.g. Posner, 1978; Ormerod, 1979; and the subsequent Model Study Group Seminar), together with the difficulties that outsiders, such as Artis (1982), faced in conducting comparisons or explaining why forecasts differed, prompted demands for a central institution to carry out this work and raise standards.

The Instrumentalist critique was that outside decision makers, who had little interest in the technical detail but who wished to use the forecasts and policy analysis from the models, had difficulty in choosing between their conflicting outputs. What was needed was an academic equivalent of the Consumers’ Association which would test and certify the models and recommend which was the best buy. Such advice would also enable the SSRC to make more informed funding decisions. Paragraph 26 of SSRC (1981) encapsulated this instrumentalist objective.

The terms of reference for the Bureau were largely Reformist. It was not to ask whether the activity was worthwhile, nor which model was the best. Instead it was to conduct comparisons of the models, increase public access to the publicly funded models, develop the relevant analytical techniques and improve standards in the area. I will examine its work under these headings: comparison, access, analysis, standards. The Bureau conducted this work with a basic staff of a part-time director and four researchers. In 1985-86 it cost £109,000 out of a Macroeconomic Modelling Consortium budget of £958,000.

3. Comparison

The Bureau’s method of operation is to take deposit from each team of a model, data base and autumn forecast towards the end of each year. Initially there were five ESRC-financed models: LBS, NIESR, CGP, Liverpool (LPL), and City University Business School (CUBS); there are now three, since support for CGP and CUBS was withdrawn in 1987. In addition, the finance ministry (Her Majesty’s Treasury, HMT) and the central bank (Bank of England, BE) models are deposited, and other models can be accepted if the proprietors wish; the commercially-based Oxford Economic Forecasting model has recently been received. The models are installed on the Bureau’s computer system, the forecast replicated, and various comparisons conducted. These evaluations are reported at a conference in July of the following year which most of the UK macro-modelling community (but relatively few other academic economists) attend. After comment from the teams the results are published.

The Bureau did not, as some had predicted, get bogged down in the substantial routine administrative and housekeeping activities involved, such as liaison, updating and programming; neither did it fail to establish reasonable working relations with the teams, as some had feared. In fact, it proved extremely efficient at solving the myriad practical and political problems involved in model comparison exercises. After being set up in September 1983, it had the skills and facilities available to produce the first book - Models of the UK Economy [1] - by the end of 1984.

The book contained a very clear introduction to the theory and practice of modelling; a comparative review of model properties in terms of dynamic multipliers; and an examination of why the forecasts differed. The difference between the forecasts was explained by recomputing them on common assumptions about exogenous variables and judgemental adjustments. Suppose the forecast of group i is ŷᵢ(x̂ᵢ, aᵢ), where x̂ᵢ is the team’s prediction for the exogenous variables and aᵢ their judgemental adjustment. The model component of the difference between team i’s and team j’s forecasts is then ŷᵢ(x̄, 0) − ŷⱼ(x̄, 0), where x̄ is a standardised projection for the exogenous variables and the judgemental adjustment is removed. This analysis [2, p. 101] tended to show that differences in the exogenous variables accounted for relatively little of the difference in forecasts, and that the judgemental adjustments tended to offset model differences and bring the forecasts closer together.
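As a purely illustrative sketch, the standardisation can be reproduced in a few lines. The linear “models”, coefficients and projections below are invented for illustration, not drawn from any of the actual UK models:

```python
# Sketch of the Bureau-style decomposition of forecast differences.
# y_i(x, a): forecast of team i given exogenous projection x and
# judgemental adjustment a. The "models" below are toy linear rules.

def y_i(x, a):          # hypothetical model of team i
    return 2.0 + 0.8 * x + a

def y_j(x, a):          # hypothetical model of team j
    return 1.5 + 0.9 * x + a

x_i, a_i = 3.0, 0.4     # team i's exogenous projection and adjustment
x_j, a_j = 2.5, -0.2    # team j's
x_bar = 2.8             # standardised exogenous projection

# Raw difference between the published forecasts...
total_diff = y_i(x_i, a_i) - y_j(x_j, a_j)
# ...versus the pure model component: common exogenous projection,
# judgemental adjustments set to zero.
model_component = y_i(x_bar, 0) - y_j(x_bar, 0)
print(total_diff, model_component)
```

Here the raw difference (1.25) greatly exceeds the model component (0.22), mimicking the Bureau’s finding that exogenous assumptions and judgement, not the models themselves, drove much of the published divergence.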

Ex post analysis of ex ante forecasting performance was added once actual data for the forecasts deposited in 1983 became available. Ex ante forecasts can be unrevealing about model quality since they are a product of both model and team. Being able to re-run the models under different assumptions enabled the Bureau to separate these influences. Suppose the published forecast is ŷ(x̂, a); the ex post forecast re-runs the model replacing x̂ by the actual outturn x; the hands-off forecast replaces the judgemental adjustment by zero. So the error in the published forecast, y − ŷ(x̂, a), equals the pure model error in the hands-off, ex post forecast, y − ŷ(x, 0); less the contribution of judgement, ŷ(x, a) − ŷ(x, 0); plus the effect of exogenous uncertainty, ŷ(x, a) − ŷ(x̂, a). This decomposition is not perfect, since the model results may interact with the judgemental adjustment and choice of exogenous variables; but it is a major improvement on what was available. Such comparative information, which is essential to assessing the role of the alternative models, is not available elsewhere in the world. An overview of the sources of error in forecasts for 1984-88 is provided in [34]. This, and other work [23], also led to much better understanding of how judgement was used, which McNees (1982, p. 39) had described as “an important but neglected subject”.
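The decomposition is an exact algebraic identity for any forecast function, which a toy example can confirm. The model rule and all numbers here are hypothetical:

```python
# Check that the three components sum exactly to the published-forecast
# error: y - f(x_hat,a) = [y - f(x,0)] - [f(x,a) - f(x,0)] + [f(x,a) - f(x_hat,a)].
# The model here is an invented linear rule used only to verify the identity.

def yhat(x, a):
    return 1.0 + 0.7 * x + a

y = 5.0                 # hypothetical outturn
x_hat, x = 4.0, 4.6     # projected vs actual exogenous variable
a = 0.3                 # judgemental adjustment

published_error = y - yhat(x_hat, a)
pure_model_error = y - yhat(x, 0)              # hands-off, ex post
judgement = yhat(x, a) - yhat(x, 0)            # contribution of judgement
exogenous = yhat(x, a) - yhat(x_hat, a)        # effect of exogenous uncertainty

# The identity holds term by term, whatever the model.
assert abs(published_error - (pure_model_error - judgement + exogenous)) < 1e-12
```

The interaction caveat in the text arises because, in a non-linear model, each component depends on where the others are evaluated; the identity still holds, but the attribution is not unique.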

The first book also contained more detailed comparison of the treatment of the labour market and the impact of monetary and fiscal policies in the models. This format - methodological discussion, comparative properties, forecast analysis, more specialised sector studies - was maintained, and the use of standard procedures each year enabled the evolution of the models and their convergence or divergence to be studied in a consistent way over time.

For the first four years the results were published in a book which came out about the end of the year following deposit of the models [1,2,3,4]. Subsequently the main comparison of dynamic multipliers was published annually in the National Institute Economic Review [28,29], and more emphasis was placed on publishing in standard academic journals. In this they have been successful, the majority of their discussion papers being published in independent outlets.

In general, the results of the forecast analysis were in accord with previous, largely US, research. Judgemental adjustment nearly always reduced forecast error, primarily by bringing the models back on track; forecasts were occasionally much worse using the true values of the exogenous variables than the projections; no model’s forecasts unambiguously dominated the others over all time periods and variables; and when confidence intervals were computed using stochastic simulation, the difference between the alternative forecasts was not statistically significant. Models which provided very similar forecast performance could provide very different policy multipliers. For comparison, the Bureau also provided mechanical forecasts from a Bayesian Vector Autoregression. Overall, the teams outperformed this, though the margin was quite small when the model did not have the benefit of judgemental adjustment. [26] reviews the UK forecasting record and presents many of the results of the Bureau’s analysis.

Contrary to some expectations, establishing comparability in the analysis was not a major obstacle to comparison. This is partly because the models, particularly the quarterly ones, do not differ much in their choice of exogenous variables, but partly because the Bureau has shown considerable ingenuity in making adjustments to establish a common base. Conducting the simulations themselves gives them a considerable advantage over projects which have to rely on the teams to



conduct them. For instance, in such a project Bryant et al. (1988, p. 30) comment: “Unfortunately, many of the model groups could not or did not follow the instructions for the baseline path precisely”. This created difficulties in establishing comparability and meant that many of the results could only be regarded as illustrative.

The style of Bureau publications is dry, judicious and non-judgemental: they confine themselves to reporting the results of the comparisons. This style has contributed to the Bureau’s ability to maintain reasonable working relationships with the teams, and a proprietor, Britton (1988, p. 130), comments on its “considerable tact and diplomacy”. But if one wishes to know the gossip, one has to read between the lines to infer which forecasts used models different from those published, or which models would not solve without judgemental adjustment. The Bureau has become somewhat more explicitly critical over time as its authority increased and it accumulated econometric evidence on model performance.

4. Access

When the Bureau installed the models on their system at Warwick, they also provided a specially written interface program and facilities to allow other economists to use them over the standard academic computer network. This was initially greeted with some enthusiasm and quite large numbers signed up to use it. However, the investment of time required by remote users trying to run a large model with which they were unfamiliar rapidly discouraged most academics, and this facility was not widely used. A Unit was also set up for a time at Warwick, in association with the Bureau, to conduct policy simulations for Parliamentary Committees.

One initiative was effective at achieving wide dissemination of the models. The Bureau took the dynamic multipliers they calculated as part of their comparisons and constructed a ‘Ready Reckoner’ which would run on a standard PC. This was essentially a linearised reduced form for the models around a forecast base. It allowed forecasts and simulations of the effect of changing a dozen policy and world exogenous variables. Any simulation could be run on all the models and the forecasts and multipliers compared graphically. It is a very effective teaching aid and quite useful for some forecasting applications. It is described in [let] and reviewed in Dawson (1989).
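A linearised reduced form of this kind reduces, in essence, to one multiplier matrix per model. The following sketch shows the mechanics with an invented two-variable example; the base forecast, multiplier values and variable names are hypothetical, not taken from the Bureau’s software:

```python
import numpy as np

# 'Ready Reckoner' style linearisation: deviations of the endogenous
# variables from the base forecast are approximated as a multiplier
# matrix times deviations of policy/exogenous variables from base.
# All numbers are invented for illustration.

base_forecast = np.array([2.0, 5.5])      # e.g. GDP growth %, inflation %
multipliers = np.array([[0.5, -0.1],      # effects on growth of spending, rates
                        [0.2,  0.3]])     # effects on inflation of spending, rates

policy_change = np.array([1.0, -0.5])     # hypothetical: +1% spending, -0.5pt rates

new_forecast = base_forecast + multipliers @ policy_change
print(new_forecast)
```

Running the same policy change through several such matrices, one per model, is what allowed the multipliers and revised forecasts to be compared graphically on a PC.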

The books were valuable in explaining the arcane jargon of the UK modellers (Type I and Type II fixes and the rest) to a wider community. One of the Bureau’s objectives was to get mainstream econometricians and macro-economists interested in the special problems of large models. They made considerable effort in this direction and had some successes, but in general the activity remains peripheral to most academic economists. Of course, it is not necessarily the case that more information about the models and better understanding of them would make academic economists more sympathetic to them.

5. Other analysis

In a range of studies, the Bureau used simulations of the models available to them to examine policy-relevant questions [1 Ch. 5; 2 Chs. 4 and 5; 3 Ch. 5; 4 Chs. 3 and 5; 7,11,12,16,20,22,31,33]. Most of these studies also contain methodological arguments. For instance, [7] criticised the prevalent practice of calculating wage-employment elasticities by perturbing the wage equation, without specifying the mechanism whereby wages, an endogenous variable, were changed, or even whether a demand or supply shock was involved. Such simulations, involving unexplained shocks to endogenous variables, have been labelled ‘if-only’ rather than ‘what-if’ by the Bureau. [16,27] contain general reviews of the use of models in policy analysis.

They also did a range of more basic methodological research on techniques, some involving non-Bureau academics. Since most of the models use rational expectations, research on topics such as sensitivity to terminal conditions, solution techniques and design of simulations under RE has been important, and they have developed a range of new procedures (e.g. [35,36]). Other topics examined include: the importance of non-linearity [6]; efficient solutions to the ragged edge problem [9]; long-run properties of models [13,25]; the role of dynamic simulation [10]; and numerical analysis [5,8,14,17,32]. The last of these is an example of ‘infra-structural’ work that the Bureau has conducted. Although many applications, particularly rational expectations, optimal control and stochastic simulations, depend on efficient solution techniques to be practicable, relatively little priority has been given to methods of computation within economics. Within the model teams skills in computation varied widely and the Bureau has tried to improve practice in this area.

Initially the Bureau did relatively little econometric research, but later this became a more important part of their activity [3 Ch. 5; 4 Ch. 5; 19,24]. Their general approach was to take the equations for a particular variable or sub-system from each model; provide a general economic theoretic framework within which the alternatives could be placed; make the adjustments necessary to convert the alternatives to a common base; re-estimate them on comparable data and sample; and subject them to a large range of misspecification tests, including whether the long-run solutions corresponded to a cointegrating vector. Each would then be nested in a general equation, against which it would be tested. Specification search from this general model would be used to select a preferred equation, which might not be one of those used by the teams. The preferred equation would be substituted in all the models and the standard simulations compared. The degree to which differences in that equation could explain the differences in simulation results could then be assessed. As examples of well documented, state-of-the-art specification testing, these are also valuable in teaching econometrics.
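The check on whether a long-run solution corresponds to a cointegrating vector can be illustrated with a minimal Engle-Granger-style sketch on synthetic data. This is a simplified stand-in for the Bureau’s procedures, not a reproduction of them: it omits lag augmentation and the proper Engle-Granger critical values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Engle-Granger-style check (sketch): regress y on x, then test the
# residual for a unit root with a simple Dickey-Fuller regression.
# Synthetic cointegrated data: x is a random walk, y = 2x + stationary noise.
n = 500
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# Step 1: OLS of the candidate long-run relation y = const + b*x.
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ b                                   # residual (equilibrium error)

# Step 2: Dickey-Fuller regression du_t = rho * u_{t-1} + e_t.
du, ulag = np.diff(u), u[:-1]
rho = (ulag @ du) / (ulag @ ulag)
se = np.sqrt(((du - rho * ulag) ** 2).mean() / (ulag @ ulag))
t_stat = rho / se
# A large negative t-statistic (judged against Engle-Granger critical
# values, not ordinary t tables) supports cointegration.
print(round(t_stat, 2))
```

With a genuinely cointegrated pair, as here, the residual is stationary and the statistic is strongly negative; for a spurious regression of two independent random walks it would typically be close to zero.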

6. Impact on UK modelling standards

It is difficult to assess the impact of an organisation on general practice; models change for a variety of reasons and the set of teams has changed. However, my impression is that the Bureau has not had a large impact on the standards of modelling practice. Practice has improved, but at about the same rate and in the same directions one would have expected from previous trends. The types of things that teams tend to be good about (e.g. equation innovation) and bad about (e.g. systems validation) are pretty much the same as before the Bureau’s establishment. However, others can assess for themselves whether the evidence set out below rejects the null hypothesis of no impact.

While one would not expect model proprietors to bite the hand that assesses them, they have been positive about the impact of the Bureau. LBS: “The Bureau has helped raise the standard of macroeconomic modelling in the UK”, Holly (1989, p. 861). NIESR: “The Bureau’s influence on our programme has been considerable, if indirect, and ... entirely beneficial”, Britton (1988, p. 130).

Housekeeping by the teams has certainly improved. Knowing that the Bureau will try to replicate results does provide an incentive for the team to carry out all those checks they always meant to do, but never had the time: e.g. make sure that the code was right, that the model had converged, that it corresponded to the documentation, that the identities added up, etc. On occasion the Bureau have revealed to proprietors peculiar properties of their models of which they did not seem to be aware. Documentation and reporting of equation diagnostic statistics have also improved.

While diffusion of equation innovations across the models was always quite rapid, the Bureau and some proprietors believe that the speed with which new econometric methods are adopted by some teams has increased. But Bureau criticisms are also sometimes ignored. For instance, on methodological grounds Liverpool have not re-estimated their model, despite the Bureau’s repeated observation that the equations lack empirical support. There is also little evidence of convergence between the models [29, p. 84]. On resource allocation, I have no information as to whether the 1987 ESRC model funding decisions were influenced by the Bureau or its work; [1,2] were available at the time.

Before the establishment of the Bureau, teams were criticised for not conducting certain sorts of evaluation, particularly historical simulations. They still do not conduct them, but the excuse has changed. Then they said such procedures were extremely difficult to conduct; now they say that the Bureau does them. Since the teams did not do this evaluation then, we do not have pre-Bureau evaluation data, and thus we cannot judge whether the models have improved as descriptions of the economy.

The Bureau’s historical static simulations [21] suggest that the models, as descriptions of the economy, suffer severe problems. Over the period 1978-85, which overlaps the estimation period of most of the models, the Theil inequality coefficient, the Root Mean Square Error for a ‘hands-off’, ex post, static simulation relative to that of a no-change forecast, was less than unity in only ten out of the 24 cases (six models, four variables: unemployment, inflation, exchange rate and growth). Even excluding the exchange rate, only in nine out of 18 cases do the models outperform a random walk. This is despite the technical innovations of the last few years, which include rational expectations, now almost universal in UK models, more extensive diagnostic testing and improved dynamic specification. The Theil coefficients for real, ex ante, forecasts are smaller than this, but that is because of the contribution of judgement, not of the model.
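A Theil inequality coefficient in the sense used here is simply the RMSE of the model’s simulation divided by the RMSE of a no-change forecast. A minimal sketch, with invented data rather than any of the Bureau’s series:

```python
import numpy as np

# Theil inequality coefficient as used here: RMSE of the model's static
# simulation relative to the RMSE of a no-change (random walk) forecast.
# U < 1 means the model beats "predict last period's value".
# The series below are invented for illustration.

actual = np.array([4.0, 4.5, 5.1, 4.8, 5.3])
model = np.array([4.2, 4.4, 4.9, 5.0, 5.2])     # hypothetical simulation

rmse_model = np.sqrt(np.mean((model[1:] - actual[1:]) ** 2))
rmse_naive = np.sqrt(np.mean((actual[:-1] - actual[1:]) ** 2))  # no-change forecast
U = rmse_model / rmse_naive
print(round(U, 3))
```

In this toy case U is well below one; the Bureau’s finding was that, for most model-variable pairs, the historical static simulations did not achieve this.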

Inability to model the exchange rate is a problem, but the Bureau comment [21, p. 31]: “Irrespective of this particular difficulty, one might ask why it is that models that are in such regular use in forecasting and policy analysis exercises are so poorly validated historically. It would appear that, whereas individual equations are often thoroughly tested against the historical record, once a satisfactory specification at that level is obtained and the equation inserted into the complete model, attention is concentrated on the use of the model in ‘forward-looking’ forecasting and policy analysis. These may be the more pressing problems facing the model proprietor, and ‘backward looking’ historical validation, system-wide, is correspondingly neglected. When the model does not perform well historically, however, and no good explanation of its failure is available, the reliability of analysis based on the model is called into question.” This criticism is very similar to that made in SSRC (1981).

A recent example of the use of the models for such policy analysis is Britton (1989). It reports a conference marking the tenth anniversary of the publication of Posner (1978), the conference which was partly responsible for prompting the review of UK macro modelling. The Bureau chapter in Britton (1989) led one reviewer, Holden (1989, p. 862), to question the usefulness of the results in all the other chapters, and to conclude the review by saying: “The basic question of whether policy simulations give insights on the real world or just demonstrate the properties of the models (and the beliefs of the model builders) remains to be answered.”

This raises a wider question about the worth of the large macro models. Any assessment of the Bureau must take this into account: we might think that an ESRC Astrological Prediction Bureau was a waste of money, however well it discharged its functions. However, the value of macro modelling is a big and disputed question. Although it is not the way it is usually considered, analysis of their value-added requires consideration of the economics of the modelling industry and examination of such factors as the characteristics of the product; the sources of demand; the pattern of competition; costs of production; and the incentives that the suppliers, the model teams, face. Such an industrial analysis may well suggest that there is a case for publicly funded testing of the product and process to maintain standards, as in some other industries.

In any event, my impression is that the Fundamentalist critique is still prevalent among academic economists. For instance, Mankiw (1988), in discussing the relation between the large models and modern macro theory, makes a comparison with Ptolemaic and Copernican astronomy. The heliocentric Copernican system was more elegant and eventually more useful, but for a considerable time after its proposal it was inferior to the geocentric Ptolemaic system at predicting the positions of the planets. Thus the latter was retained for navigation and other practical purposes. Of course, an analogy in which modern macroeconomics shows the promise of Copernican astronomy while large-modellers add epicycles to a temporarily useful but ultimately degenerate research programme is more likely to appeal to an academic macroeconomist than to the proprietor of a model.

The Bureau, perhaps wisely, has not taken an explicit position on the value of macro modelling as an activity, even when analysing the role of models in macro-economic research [30]. Their implicit position seems close to that of Higgins (Bryant et al., 1988, p. 294), who concludes that “a formal and quantified framework is an irreplaceable adjunct to the processes of policy thought”. There is no alternative to large models for a range of macro-economic forecasting and policy questions; therefore it is important that they are constructed and evaluated as carefully as possible. Thus the objectives of the Bureau’s research programme have been: “Through a systematic analysis of models and forecasts, detailed scrutiny of their economic and statistical foundations, and a critical appraisal of the technical methods employed, it contributes to an improved understanding of the UK economy and the models thereof, and to improved modelling procedures.” [4, p. 1].

7. Conclusion

Given that large modelling is regarded as a worthwhile activity which the ESRC should finance, there is little doubt as to the usefulness of the task set the Bureau. For instance, Helliwell (Bryant et al., 1988, p. 137), discussing international models, says: “To make better use of the existing models and to make better informed decisions about the design of future research strategies, it is worthwhile to invest more care in analytical and simulation work”; the research programme he then suggests is almost exactly that carried out by the Bureau. But should this task be done by the teams, or is a central organisation needed? The difficulties and deficiencies of ad hoc comparative conferences have already been noted, and they tend not to promote feedback and follow-up. Model listings and the documentation provided by the teams can be uninformative to outsiders. The teams have relatively little incentive to do such evaluation and comparison, and even if they could or would do it for themselves, which is unlikely, there are gains from consistency, comparability and perhaps from economies of scale by having exercises done centrally. Thus, as a relatively small proportion of the public sector money spent on macro-models in the UK, the Bureau represents a good investment. What is known about the comparative systems properties of UK models is largely known because of the Bureau’s work.

There is no doubt that the Bureau did, very successfully, all that was asked of it and more. It has produced a large volume of high-quality published output; moved the UK from being a laggard in large model evaluation and comparison to a leader; generated a set of results which are an essential resource to any study of large models; and introduced a range of methodological innovations. Parke (1987) comments, "Current techniques for evaluating and comparing macroeconomic models are a curious combination of scientific enquiry and ad hoc calculations." The Bureau's work on experimental design has made these techniques much more scientific.

Whether the Bureau as a model is exportable to other countries is a more difficult question. Gregory (1988), discussing whether it could be applied in Australia, emphasises that sufficient funding to cover the fixed costs, a degree of cooperation from the model proprietors, and a very capable team would all be required to make it effective. Partly because of the public sector nature of model funding these conditions were met in the UK; they might be difficult to meet elsewhere, and Gregory concludes that they are unlikely to be met in Australia. However, once an organisation like the Bureau is established, there are incentives for teams to co-operate and to be seen to meet "industry standards", and proprietors have voluntarily deposited their models with the Bureau.

References

Bureau publications (where an earlier version of a published article was produced as Discussion Paper i, it is denoted {DPi})

[1] Wallis, K.F., ed., M.J. Andrews, D.N.F. Bell, P.G. Fisher and J.D. Whitley, 1984, Models of the UK Economy, Vol. 1 (Oxford University Press, Oxford).

[2] Wallis, K.F., ed., M.J. Andrews, D.N.F. Bell, P.G. Fisher and J.D. Whitley, 1985, Models of the UK Economy, Vol. 2 (Oxford University Press, Oxford).

[3] Wallis, K.F., ed., M.J. Andrews, P.G. Fisher, J.A. Longbottom and J.D. Whitley, 1986, Models of the UK Economy, Vol. 3 (Oxford University Press, Oxford).

[4] Wallis, K.F., ed., P.G. Fisher, J.A. Longbottom, D.S. Turner and J.D. Whitley, 1987, Models of the UK Economy, Vol. 4 (Oxford University Press, Oxford).

[5] Hughes Hallett, A.J., 1984, "Multiparameter extrapolation and deflation methods for solving equation systems", International Journal of Mathematics and Mathematical Sciences, 7, 793-802 {DP1}.

[6] Fisher, P.G. and M. Salmon, 1986, "On evaluating the importance of non-linearity in large macroeconometric models", International Economic Review, 27, 625-646 {DP2}.

[7] Andrews, M.J., D.N.F. Bell, P.G. Fisher, K.F. Wallis and J.D. Whitley, 1985, "Models of the UK economy and the real wage-employment debate", National Institute Economic Review, no. 112, 41-52 {DP3}.

[8] Fisher, P.G., S. Holly and A.J. Hughes Hallett, 1986, "Efficient solution techniques for dynamic nonlinear rational expectations models", Journal of Economic Dynamics and Control, 10, 139-145 {DP4}.


[9] Wallis, K.F., 1986, "Forecasting with an econometric model: The ragged edge problem", Journal of Forecasting, 5, 1-14 {DP5}.

[10] Pagan, A., 1989, "On the role of simulation in the statistical evaluation of econometric models", Journal of Econometrics, 40, 125-139 {DP6}.

[11] Fisher, P.G., K.F. Wallis and J.D. Whitley, 1986, "Financing rules and output variability: Evidence from UK macroeconomic models", May {DP7}.

[12] Whitley, J.D. and R.A. Wilson, 1988, "Hours reductions within large-scale macroeconomic models: Conflict between theory and empirical application", in: R.A. Hart, ed., Employment, Unemployment and Labour Utilisation (Unwin Hyman, London) 228-252 {DP8}.

[13] Wallis, K.F. and J.D. Whitley, 1987, "Long-run properties of large-scale macroeconometric models", Annales d'Economie et de Statistique, 6/7, 207-224 {DP9}.

[14] Hughes Hallett, A.J. and P.G. Fisher, 1990, "On economic structures and model solution methods", Oxford Bulletin of Economics and Statistics, Aug., revised version of {DP10}.

[15] Wallis, K.F., 1988, "Some recent developments in macroeconometric modelling in the U.K.", Australian Economic Papers, 27, 7-25 {DP11}.

[16] Turner, D.S., K.F. Wallis and J.D. Whitley, 1989, "Using macroeconometric models to evaluate policy proposals", in: A. Britton, ed. (1989) 103-105 {DP12}.

[17] Fisher, P.G. and A. Hughes Hallett, 1988, "Iterative techniques for solving simultaneous equations systems: A view from the economics literature", Journal of Computational and Applied Mathematics, 24, 241-255 {DP13}.

[18] Macdonald, G. and D.S. Turner, 1989, "A Ready Reckoner package for macroeconomics teaching", Oxford Bulletin of Economics and Statistics, 51 {DP14}.

[19] Turner, D.S., K.F. Wallis and J.D. Whitley, 1989, "Differences in the properties of large-scale macroeconometric models: The role of labour market specifications", Journal of Applied Econometrics, 4, 317-344 {DP15}.

[20] Turner, D.S., 1988, "Does the UK face a balance of payments constraint on growth? A quantitative analysis using the LBS and NIESR models", Sept. {DP16}.

[21] Fisher, P.G. and K.F. Wallis, 1990, "The historical tracking performance of UK macroeconometric models 1978-85", Economic Modelling, 7, no. 2, 179-197 {DP17}.

[22] Turner, D.S. and J.D. Whitley, 1989, "The importance of the distinction between long- and short-term unemployment in UK macroeconomic models", Feb. {DP18}.

[23] Turner, D.S., "The role of judgement in macroeconomic forecasting", Journal of Forecasting, forthcoming {DP19}.

[24] Fisher, P.G., S.K. Tanna, D.S. Turner, K.F. Wallis and J.D. Whitley, 1989, "Econometric evaluation of the exchange rate in models of the UK economy", July {DP20}; revised version in Economic Journal, forthcoming.

[25] Deleau, M., C. le Van and P. Malgrange, 1989, "The long run of macroeconometric models", July {DP21}.

[26] Wallis, K.F., 1989, "Macroeconomic forecasting: A survey", Economic Journal, 99, March, 28-61.

[27] Wallis, K.F., 1988, "Empirical models and macroeconomic policy analysis", in: Bryant et al. (1988).

[28] Fisher, P.G., S.K. Tanna, D.S. Turner, K.F. Wallis and J.D. Whitley, 1988, "Comparative properties of models of the UK economy", National Institute Economic Review, 125, 69-87.

[29] Fisher, P.G., S.K. Tanna, D.S. Turner, K.F. Wallis and J.D. Whitley, 1989, "Comparative properties of models of the UK economy", National Institute Economic Review, 129, 69-87.

[30] The role of models in macroeconomic research, 1985, Paper prepared for a conference organised by the ESRC Economic Affairs Committee, July.

[31] Whitley, J.D., 1988, "Manufacturing and services in UK macromodels", in: T.S. Barker and J.P. Dunne, eds., The British Economy After Oil (Croom Helm, London) 39-62.

[32] Fisher, P.G. and A.J. Hughes Hallett, 1988, "The convergence characteristics of iterative techniques for solving econometric models", Oxford Bulletin of Economics and Statistics, 49, 231-244.

[33] Turner, D.S., K.F. Wallis and J.D. Whitley, 1987, "Evaluating special employment measures with macroeconometric models", Oxford Review of Economic Policy, 3, xx-xxxvi.

[34] Wallis, K.F. and J.D. Whitley, 1990, "Sources of error in forecasts and expectations: UK economic models, 1984-88" {DP22}.

[35] Fisher, P.G. and D.S. Turner, 1990, "The exchange rate, forward expectations and the properties of macroeconometric models" {DP23}.

[36] Fisher, P.G. and A.J. Hughes Hallett, 1988, "Efficient solution techniques for linear and non-linear rational expectations models", Journal of Economic Dynamics and Control, 12, 635-657.

Other publications

Artis, M.J., 1982, "Why do forecasts differ?", Paper presented to the Panel of Academic Consultants, no. 17 (Bank of England, London).

Ball, James and Sean Holly, 1989, "Macro-econometric model building in the United Kingdom", in: R.G. Bodkin, L.R. Klein and K. Marwah, eds., A History of Macro-Econometric Modelling, forthcoming.

Britton, Andrew, 1988, Review of "Models of the UK Economy, Vols. 1, 2, 3", Economica, 55, 130-131.

Britton, Andrew, ed., 1989, Policymaking with Macroeconomic Models (Avebury, Aldershot).

Bryant, Ralph C., Dale W. Henderson, Gerald Holtham, Peter Hooper and Steven A. Symansky, eds., 1988, Empirical Macroeconomics for Interdependent Economies (The Brookings Institution, Washington, DC).

Dawson, A., 1989, "Macroeconomics teaching computer packages: A review", Economic Journal, 99, 1275-1283.

Deaton, A., 1981, "On the usefulness of macroeconomic models", Paper prepared for the Bank of England Panel of Academic Consultants, March (Bank of England, London).

Gregory, R.G., 1988, "Some recent developments in macroeconometric modelling in the UK: Comment", Australian Economic Papers, 27, 26-28.

Holden, Kenneth, 1989, Review of Britton (1989), Economic Journal, 99, 861-863.

Holly, Sean, 1989, Review of Bryant et al. (1988), Economic Journal, 99, 858-860.

Kenway, P., 1989, UK macroeconometric models: A survey, Discussion Paper in Economics, no. 221 (University of Reading, Reading).

Mankiw, N.G., 1988, Recent developments in macroeconomics: A very quick refresher course, Discussion Paper no. 1366 (Harvard Institute of Economic Research, Cambridge, MA).

McNees, Stephen K., 1982, "The role of macroeconometric models in forecasting and policy analysis in the United States", Journal of Forecasting, 1, 37-48.

Ormerod, P., ed., 1979, Economic Modelling (Heinemann, London).

Parke, W.R., 1987, "Macroeconometric model comparison and evaluation techniques: A practical appraisal", Journal of Applied Econometrics, 2, 133-144.

Posner, M., ed., 1978, Demand Management (Heinemann, London).

SSRC, 1981, Report by the Sub-Committee on Macro-Economic Research in the UK.

Biography: Ron SMITH is Professor of Applied Economics at Birkbeck College and has written, taught and consulted extensively on macro-models.
