Complexity and economics

Complexity and economic policy

by Alan Kirman, École des hautes études en sciences sociales, Paris, and Aix-Marseille University

Over the last two centuries there has been a growing acceptance of social and political liberalism as the desirable basis for societal organisation. Economic theory has tried to accommodate itself to that position and has developed increasingly sophisticated models to justify the contention that individuals left to their own devices will self-organise into a socially desirable state. However, in so doing, it has led us to a view of the economic system that is at odds with what has been happening in many other disciplines.

Although in fields such as statistical physics, ecology and social psychology it is now widely accepted that systems of interacting individuals will not have the sort of behaviour that corresponds to that of one average or typical particle or individual, this has not had much effect on economics. Whilst those disciplines moved on to study the emergence of non-linear dynamics as a result of the complex interaction between individuals, economists relentlessly insisted on basing their analysis on that of rational optimising individuals behaving as if they were acting in isolation. Indeed, this is the basic paradigm on which modern economic theory and our standard economic models are based. It dates from Adam Smith’s (1776) notion of the Invisible Hand which suggested that when individuals are left, insofar as possible, to their own devices, the economy will self-organise into a state which has satisfactory welfare properties.

Yet this paradigm is neither validated by empirical evidence nor does it have sound theoretical foundations. It has become an assumption. It has been the cornerstone of economic theory although the persistent arrival of major economic crises would seem to suggest that there are real problems with the analysis. Experience suggests that amnesia is prevalent among economists and that, while each crisis provokes demands for new approaches to economics (witness the birth of George Soros’ Institute for New Economic Thinking), in the end inertia prevails and economics returns to the path that it was already following.

There has been a remarkable tendency to use a period of relative calm to declare victory over the enemy. Recall the declaration of Robert Lucas, Nobel Prize winner and President of the American Economic Association, in his 2003 Presidential Address: “The central problem of depression-prevention has been solved.”

Both economists and policy makers had been lulled into a sense of false security during this brief period of calm.

Then came 2008 and, as always in times of crisis, voices were raised, mainly by commentators and policy makers, asking why economists had anticipated neither the onset nor the severity of the crisis.

When Her Majesty the Queen asked economists at the London School of Economics what had gone wrong, she received the following reply: “In summary, Your Majesty, the failure to foresee the timing, extent and severity of the crisis … was principally the failure of the collective imagination of many bright people … to understand the risks to the system as a whole.”

As soon as one considers the economy as a complex adaptive system in which the aggregate behaviour emerges from the interaction between its components, no simple relation between the individual participant and the aggregate can be established. Because of all the interactions and the complicated feedbacks between the actions of the individuals and the behaviour of the system there will inevitably be “unforeseen consequences” of the actions taken by individuals, firms and governments. Not only the individuals themselves but the network that links them changes over time. The evolution of such systems is intrinsically difficult to predict, and for policy makers this means that assertions such as “this measure will cause that outcome” have to be replaced with “a number of outcomes are possible and our best estimates of the probabilities of those outcomes at the current point are…”

Consider the case of the possible impact of Brexit on the British economy and the global economy. Revised forecasts of the growth of these economies are now being issued, but when so much depends on the conditions under which the exit is achieved, is it reasonable to make such deterministic forecasts? Given the complexity and interlocking nature of the economies, the political factors that will influence the nature of the separation and the perception and anticipation of the participants (from individuals to governments) of the consequences, how much confidence can we put in point estimates of growth over the next few years?

While some might take the complex systems approach as an admission of our incapacity to control or even influence economic outcomes, this need not be the case. Hayek once argued that there are no economic “laws”, just “patterns”. The development of Big Data and the techniques for its analysis may provide us with the tools to recognise such patterns and to react to them. But these patterns arise from the interaction of individuals who are in many ways simpler than homo economicus, and it is the interaction between these relatively simple individuals, who react to what is going on rather than optimising in isolation, that produces the major upheavals that characterise our systems.

Finally, in trying to stabilise such systems it is an error to focus on one variable either to control the system or to inform us about its evolution. Single variables such as the interest rate do not permit sufficient flexibility for policy actions and single performance measures such as the unemployment rate or GDP convey too little information about the state of the economy.

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2B4

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

A pragmatic holist: Herbert Simon, economics and The Architecture of Complexity

by Vela Velupillai, Madras School of Economics

“Herb had it all put together at least 40 years ago – and I’ve known him only for 35.” Allen Newell, 1989.

And so it was, with Hierarchy in 1950, Near-Decomposability from about 1949, and Causality, underpinning the reasonably rapid evolution of dynamical systems into a series of stable complex structures. Almost all of these pioneering articles are reprinted in Simon’s 1977 collection and, moreover, the hierarchy and near-decomposability classics appear in section 4 with the heading “Complexity”. The cybernetic vision became the fully-fledged digital computer basis of boundedly rational human problem solvers implementing heuristic search procedures to prove, for example, axiomatic mathematical theorems (in the monumental Principia Mathematica of Russell and Whitehead) substantiating Allen Newell’s entirely reasonable claim quoted above.

In defining the notion of complexity in The Architecture of Complexity (AoC), Simon eschews formalisms and relies on a rough, working, concept of complex systems that would help identify examples of observable structures – predominantly in the behavioural sciences – that could lead to theories and, hence, theorems, of evolving dynamical systems that exhibit properties that are amenable to design and prediction with the help of hierarchy, near-decomposability and causality. Thus, the almost informal definition is (italics added): “Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts … in the … pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist.”

Simon was always a pragmatic holist, even while attempting the reduction of the behaviour of complex entities to parsimonious processes that would exhibit the properties of “wholes”, based on nonsimply interacting “parts”, that may themselves be simple. He summarised the way this approach could apply to economics in a letter to Professor Axel Leijonhufvud and me after reading my book Computable Economics. Simon argued that:

“Finally, we get to the empirical boundary … of the level of complexity that humans actually can handle, with and without their computers, and – perhaps more important – what they actually do to solve problems that lie beyond this strict boundary even though they are within some of the broader limits.

The latter is an important point for economics, because we humans spend most of our lives making decisions that are far beyond any of the levels of complexity we can handle exactly; and this is where satisficing, floating aspiration levels, recognition and heuristic search, and similar devices for arriving at good-enough decisions take over. [The term ‘satisfice’, which appears in the Oxford English Dictionary as a Northumbrian synonym for ‘satisfy’, was borrowed by Simon (1956) in ‘Rational Choice and the Structure of the Environment’ to describe a strategy for reaching a decision the decider finds adequate, even if it’s not optimal in theory.] A parsimonious economic theory, and an empirically verifiable one, shows how human beings, using very simple procedures, reach decisions that lie far beyond their capacity for finding exact solutions by the usual maximizing criteria.”

In many ways, AoC summarised Simon’s evolving (sic!) visions of a quantitative behavioural science, which provided the foundations of administering complex, hierarchically structured, causal organisations, by boundedly rational agents implementing – with the help of digital computers – procedures that were, in turn, reflections of human problem solving processes. But it also presaged the increasing precision of predictable reality – not amounting to non-pragmatic, non-empirical phenomena – requiring an operational description of complex systems that were observable in nature, resulting from the evolutionary dynamics of hierarchical structures. Thus, the final – fourth – section of AoC “examines the relation between complex systems and their descriptions” – for which Simon returned to Solomonoff’s pioneering definition of algorithmic information theory.

AoC was equally expository on the many issues with which we have come to associate Simon’s boundedly rational agents (and Institutions) satisficing – instead of optimising, again for pragmatic, historically observable, realistic reasons – using heuristic search processes in Human Problem Solving contexts of behavioural decisions. The famous distinction between substantive and procedural rationality arose from the dichotomy of a state vs process description of a world “as sensed and … as acted upon”.

Essentially AoC is suffused with pragmatic definitions and human procedures of realistic implementations, even in the utilising of digital computers. Computability theory assumes the Church-Turing Thesis in defining algorithms. The notion of computational complexity is predicated upon the assumption of the validity of the Church-Turing Thesis. Simon’s algorithms for human problem solvers are heuristic search processes, where no such assumption is made. Hence the feeling that engulfed him in his later years is not surprising (italics added):

“The field of computer science has been much occupied with questions of computational complexity, the obverse of computational simplicity. But in the literature of the field, ‘complexity’ usually means something quite different from my meaning of it in the present context. Largely for reasons of mathematical attainability, and at the expense of relevance, theorems of computational complexity have mainly addressed worst-case behaviour of computational algorithms as the size of the data set grows larger. In the limit, they have even focused on computability in the sense of Gödel, and Turing and the halting problem. I must confess that these concerns produce in me a great feeling of ennui.”

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2Lg

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

From economic crisis to crisis in economics

by Andy Haldane, Chief Economist and Executive Director, Monetary Analysis & Statistics, Bank of England

It would be easy to become very depressed at the state of economics in the current environment. Many experts, including economics experts, are simply being ignored. But the economic challenges facing us could not be greater: slowing growth, slowing productivity, the retreat of trade, the retreat of globalisation, high and rising levels of inequality. These are deep and diverse problems facing our societies and we will need deep and diverse frameworks to help understand them and to set policy in response to them. In the pre-crisis environment when things were relatively stable and stationary, our existing frameworks in macroeconomics did a pretty good job of making sense of things.

But the world these days is characterised by features such as discontinuities, tipping points, multiple equilibria, and radical uncertainty. So if we are to make economics interesting and the response to the challenges adequate, we need new frameworks that can capture the complexities of modern societies.

We are seeing increased interest in using complexity theory to make sense of the dynamics of economic and financial systems. For example, epidemiological models have been used to understand and calibrate regulatory capital standards for the largest, most interconnected banks, the so-called “super-spreaders”. Less attention has been placed on using complexity theory to understand the overall architecture of public policy – how the various pieces of the policy jigsaw fit together as a whole in relation to modern economic and financial systems. These systems can be characterised as a complex, adaptive “system of systems”, a nested set of sub-systems, each one itself a complex web. The architecture of a complex system of systems means that policies with varying degrees of magnification are necessary to understand and to moderate fluctuations. It also means that taking account of interactions between these layers is important when gauging risk.

Although there is no generally-accepted definition of complexity, that proposed by Herbert Simon in The Architecture of Complexity – “one made up of a large number of parts that interact in a non-simple way” – captures well its everyday essence. The whole behaves very differently than the sum of its parts. The properties of complex systems typically give rise to irregular, and often highly non-normal, statistical distributions for these systems over time. This manifests itself as much fatter tails than a normal distribution would suggest. In other words, system-wide interactions and feedbacks generate a much higher probability of catastrophic events than Gaussian distributions would imply.
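
To give a feel for what “much fatter tails” means in practice, the sketch below compares the probability of a large deviation under a Gaussian with that under a Student-t distribution, used here purely as an illustrative stand-in for a fat-tailed outcome. The threshold and the degrees of freedom are arbitrary choices for the example, not figures from the article.

```python
# Illustrative comparison of tail probabilities: Gaussian vs a fat-tailed
# Student-t distribution (3 degrees of freedom, an arbitrary choice).
from scipy.stats import norm, t

threshold = 5.0  # a deviation of five Gaussian standard deviations

p_gaussian = norm.sf(threshold)      # P(X > threshold) under the normal
p_fat_tail = t.sf(threshold, df=3)   # P(X > threshold) under Student-t(3)

print(f"Gaussian tail probability:  {p_gaussian:.2e}")
print(f"Student-t tail probability: {p_fat_tail:.2e}")
print(f"Fat-tail / Gaussian ratio:  {p_fat_tail / p_gaussian:,.0f}")
```

Under these (illustrative) assumptions the same extreme event is tens of thousands of times more likely with fat tails than the Gaussian benchmark would suggest.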

For evolutionary reasons of survival of the fittest, Simon posited that “decomposable” networks were more resilient and hence more likely to proliferate. By decomposable networks, he meant organisational structures which could be partitioned such that the resilience of the system as a whole was not reliant on any one sub-element. This may be a reasonable long-run description of some real-world complex systems, but less suitable as a description of the evolution of socio-economic systems. The efficiency of many of today’s networks relies on their hyper-connectivity. There are, in the language of economics, significantly increasing returns to scale and scope in a network industry. Think of the benefits of global supply chains and global interbank networks for trade and financial risk-sharing. This provides a powerful secular incentive for non-decomposable socio-economic systems.

Moreover, if these hyper-connected networks do face systemic threat, they are often able to adapt in ways which avoid extinction. For example, the risk of social, economic or financial disorder will typically lead to an adaptation of policies to prevent systemic collapse. These adaptive policy responses may preserve otherwise-fragile socio-economic topologies. They may even further encourage the growth of connectivity and complexity of these networks. Policies to support “super-spreader” banks in a crisis for instance may encourage them to become larger and more complex. The combination of network economies and policy responses to failure means socio-economic systems may be less Darwinian, and hence decomposable, than natural and biological systems.

What public policy implications follow from this complex system of systems perspective? First, it underscores the importance of accurate data and timely mapping of each layer in the system. This is especially important when these layers are themselves complex. Granular data is needed to capture the interactions within and between these complex sub-systems.

Second, modelling of each of these layers, and their interaction with other layers, is likely to be important, both for understanding system risks and dynamics and for calibrating potential policy responses to them.

Third, in controlling these risks, something akin to the Tinbergen Rule is likely to apply: there is likely to be a need for at least as many policy instruments as there are complex sub-components of a system of systems if risk is to be monitored and managed effectively. Put differently, an under-identified complex system of systems is likely to result in a loss of control, both system-wide and for each of the layers.

In the meantime, there is a crisis in economics. For some, it is a threat. For others it is an opportunity to make a great leap forward, as Keynes did in the 1930s. But seizing this opportunity requires first a re-examination of the contours of economics and an exploration of some new pathways. Second, it is important to look at economic systems through a cross-disciplinary lens. Drawing on insights from a range of disciplines, natural as well as social sciences, can provide a different perspective on individual behaviour and system-wide dynamics.

The New Approaches to Economic Challenges (NAEC) initiative does so, and the OECD’s willingness to consider a complexity approach puts the Organisation at the forefront of bringing economic analysis and policy making into the 21st century.

This article draws on contributions to the OECD NAEC Roundtable on 14 December 2016; The GLS Shackle Biennial Memorial Lecture on 10 November 2016; and “On microscopes and telescopes”, at the Lorentz centre, Leiden, workshop on socio-economic complexity on 27 March 2015.

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2M4

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

Complexity theory and evolutionary economics

by Robert D. Atkinson, President, Information Technology and Innovation Foundation

If there was any possible upside from the destruction stemming from the financial crisis and Great Recession, it was that the intellectual hegemony of neoclassical economics began to be more seriously questioned. As such, the rising interest in complexity theory is a welcome development. Indeed, approaching economic policy from a complexity perspective promises significant improvements. However, this will only be the case if we avoid a Hayekian passivity grounded in the view that action is too risky given just how complex economic systems are. This would be a significant mistake, for the risk of non-action in complex systems is often higher than the risk of action, especially if the latter is informed by rigorous thinking grounded in robust argumentation.

The flaws of neoclassical economics have long been pointed out, including its belief in the “economy as machine”, where, if policy makers pull a lever, they will get an expected result. However, despite what Larry Summers has written, economics is not a science that applies for all times and places. It is a doctrine, and as economies evolve, so too should doctrines. After the Second World War, when the United States was shifting from what Michael Lind calls the second republic (the post-Civil War governance system) to the third republic (the post-New Deal, Great Society governance structure), there was an intense intellectual debate about the economic policy path America should take. In Keynes-Hayek: The Clash That Defined Modern Economics, Nicholas Wapshott described this debate between Keynes (a proponent of the third republic), who articulated the need for a larger and more interventionist state, and Hayek (a defender of the second republic), who worried about state over-reach. Today, we are in need of a similar great debate about the future of economic policy for the emerging “fourth republic.”

If we are to develop such an economic doctrine to guide the current socio-technical economic system, then complexity will need to play a foundational role. But a risk of going down the complexity path is that proponents may substitute one ideology for another. Where today’s policy makers believe that economic systems are relatively simple and that policies generate only first-order effects, policy makers who have embraced complexity may believe that second-, third-, and fourth-order effects are rampant. In other words, the butterfly in Mexico can set off a tornado in Texas. If things are this complex, we are better off following Hayek’s advice to intervene as little as possible. At least with a mechanist view, policy makers felt they could do something and perhaps get it right. Hayekian complexity risks leading to inaction.

This gets to a second challenge, “group think.” Many advocates of complexity point to complex financial tools (such as collateralised debt obligations, CDOs) as the cause of the financial crisis. Regulators simply didn’t have any insight because of the complexity of the instruments. But these tools were symptoms. At the heart of the crisis, at least in the United States, was mortgage origination fraud. The even more serious problem was intellectual: virtually all neoclassical economists subscribed to the theory that in an efficient market, all the information that would allow an investor to predict the next price move is already reflected in the current price. If housing prices increased 80% in just a few years, then their actual worth had increased 80%. So any reset of economics has to be based not just on replacing many of the basic tenets of neoclassical economics, it has to be based on replacing a troubling tendency toward group think. Yet, replacing the former may indeed be harder than the latter.

So where should we go with complexity? I believe that a core component of complexity is and should be evolution. In an evolutionary view, an economy is an “organism” that is constantly developing new industries, technologies, organisations, occupations, and capabilities while at the same time shedding older ones that new technologies and other evolutionary changes make redundant. This rate of evolutionary change differs over time and space, depending on a variety of factors, including technological advancement, entrepreneurial effort, domestic policies, and the international competitive environment. To the extent that neoclassical models consider change, it is seen as growth more than evolution. In other words, market transactions maximise static efficiency and consumer welfare. As Alan Blinder writes, “Can economic activities be rearranged so that some people are made better off, but no one is made worse off? If so we have uncovered an inefficiency. If not, the system is efficient.”

In complexity or evolutionary economics, we should be focusing not on static allocative efficiency, but on adaptive efficiency. Douglass North argues that: “Adaptive efficiency … is concerned with the kinds of rules that shape the way an economy evolves through time. It is also concerned with the willingness of a society to acquire knowledge and learning, to induce innovation, to undertake risk and creative activity of all sorts, as well as to resolve problems and bottlenecks of the society through time.” Likewise, Richard Nelson and Sidney G. Winter wrote in their 1982 book An Evolutionary Theory of Economic Change, “The broader connotations of ‘evolutionary’ include a concern with processes of long-term and progressive change.”

This provides a valuable direction. It means that a key focus for economic policy should be to encourage adaptation, experimentation and risk taking. It means supporting policies to intentionally accelerate economic evolution, especially from technological and institutional innovation. This means not only rejecting neo-Ludditism in favour of techno-optimism, it means the embrace of a proactive innovation policy. And it means enabling new experiments in policy, recognising that many will fail, but that some will succeed and become “dominant species.” Policy and programme experimentation will better enable economic policy to support complex adaptive systems.

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2Df

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

Complexity, modesty and economic policy

by Lex Hoogduin, University of Groningen and GloComNet

Societies and economies are complex systems, but the theories used to inform economic policies predominantly neglect complexity. They assume, for example, representative agents such as typical consumers, and they also assume that the future is risky rather than uncertain. This assumption allows for the application of the probability calculus and a whole series of other techniques based on it.

In risk situations, all potential outcomes of a policy can be known. This is not the case in situations of uncertainty, but human beings, policy makers included, cannot escape having to take their decisions and having to act facing an uncertain future. The argument is one of logic. Human beings cannot know now what will be discovered in the future. Future discoveries may however impact and shape the consequences of their current decisions and actions. Therefore, they are unable to come up with an exhaustive list of potential outcomes of a policy decision or action.

Properly taking into account the complexity of the economy and the uncertainty of the future implies a paradigm shift in economics. That paradigm does not need to be developed from scratch. It builds on modern complexity science, neo-Austrian economics (in particular Hayek and von Mises), as well as the work of Keynes and Knight and certain strands of cognitive psychology (for example, Kahneman 2011). There is no room here to elaborate on the theory and the claim that it entails a paradigm shift. Rather, I will discuss the implications for economic policy that follow from this paradigm.

This starts with the recognition that the future cannot be predicted in detail. We should be modest about what can be achieved with economic policy. This is the “modesty principle”. Economic policy cannot deliver specific targets for economic growth, income distribution, inflation, the increase of the average temperature in four decades from now, etc. Economic policy makers would be wise to stop pretending that they can deliver what they cannot. This insight implies that many current policies should be discontinued. To mention just one example: inflation targeting by central banks does not pass the modesty test.

This principle also implies refraining from detailed economic forecasts as a basis for policy making and execution. Policies should not be made on the assumption that we know the value of certain variables which we cannot know. An example here is the income multiplier in relation to changes in fiscal policy. The modesty principle also flashes red for risk-based regulation and supervision.

What economic policy can do is contribute to the formation and evolution of a fit economic order, avoid doing harm to such an order (what I would call the “do no harm principle”), and be as little as possible a source of uncertainty for private economic agents.

Order is a central concept in the alternative paradigm, replacing the (dis)equilibrium concept in mainstream economics. An order is the set of possible general outcomes (patterns, like growth, inflation, cyclicality, etc.) emerging from purposefully acting and interacting individuals on the basis of a set of rules in a wide sense (laws, ethics, conventions…), together called a regime. Economics can analyse the connection between changes in regime and changes in economic order. Economic policy can influence the economic order through changing the regime.

However, this knowledge is not certain. There is always the potential for surprises (positive and negative; opportunities and threats) and unintended consequences. Policy can therefore not be designed first and then just be executed as designed. Policy making and execution have to evolve in a process of constant monitoring and adaptation. This would also allow for evolutionary change. An economic order that is not allowed to evolve may lose its fitness and may suddenly collapse or enter a crisis (as described by Scheffer for critical transitions in society). This mechanism may have played a role in the Great Moderation leading up to the financial crisis of 2007/2008 and in the crisis of fully funded pension systems. It is also a warning against basing sustainability policies on precise temperature targets decades in the unknowable future.

Fitness of an order has five dimensions. The first is an order in which agents are acting as described in the previous paragraph – policy making involves a process of constant monitoring and adaptation. In addition to that, fitness is determined by alertness of agents (the ability to detect mistakes and opportunities); their resilience (the ability to survive and recover from mistakes and negative surprises); adaptive capacity (the ability to adjust); and creative capacity (the ability to imagine and shape the future). Policies may be directed at facilitating economic agents to improve these capacities, although constrained by the “modesty” and “do no harm” principles. Note that the concept of stability does not appear in the definition of fitness. This marks a difference with current policies which put much emphasis on stability.

In its own actions the government should be transparent and predictable. The best way to do that seems to be to follow simple rules. For example in fiscal policy, balance the budget, perhaps with clearly-defined, limited room for automatic stabilisers to work.

This alternative paradigm highlights certain methods and analytical techniques, including narrative techniques, network analysis, evolutionary logic, qualitative scenario thinking, non-linear dynamics (Scheffer), historical analysis (the development of complex systems is path dependent) and (reverse) stress testing.

Economic policies developed along these lines help people to live their lives as they wish. They are good policies for good lives.

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2CF

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

The rising complexity of the global economy

by Sony Kapoor, Managing Director, Re-Define International Think Tank and CEO of Court Jesters Consulting

A complicated system (such as a car) can be disassembled and understood as the sum of its parts. In contrast, a complex system (such as traffic) exhibits emergent characteristics that arise out of the interaction between its constituent parts. Applying complexity theory to economic policy making requires this important recognition – that the economy is not a complicated system, but a complex one.

Historically, economic models and related policy making have treated the economy as a complicated system: simplified and stylised models, often applied to a closed economy, a specific sector or only particular channels of interaction such as interest rates, seek first to simplify the real economy, then to understand it and finally to generalise in order to make policy.

This approach is increasingly out-dated and will produce results that simply fail to capture the rising complexity of the modern economy. Any policy decisions based on this notion of a complicated system that is the sum of its parts can be dangerously inaccurate and inappropriate. What are the forces driving this increasing complexity in the global economy? What, if anything, can be done about this?

A complex system can be roughly understood as a network of nodes, where the nodes themselves are interconnected to various degrees through single or multiple channels. This means that whatever happens in one node is transmitted through the network and is likely to impact other nodes to various degrees. The behaviour of the system as a whole thus depends on the nodes, as well as the nature of the inter-linkages between them. The complexity of the system, in this instance the global economy, is influenced by a number of factors. These include first, the number of nodes; second, the number of inter-linkages; third, the nature of interlinkages; and fourth, the speed at which a stimulus or shock propagates to other nodes. Let us now apply each of these factors to the global economy.

The global economy has seen a rapid increase in the number of nodes. One way of understanding this is to look at countries that are active participants in the global economy. The growth of China and other emerging markets, as well as their increasing integration into the world trading and more recently global financial systems, is a good proxy to track the rise in the number of nodes. The relative size and importance of these nodes has also risen with the People’s Republic of China, by some measures already the world’s largest economy.

Simultaneously, the number of interlinkages between nodes has risen even more rapidly. The number of possible connections between nodes increases non-linearly with the increase in the number of nodes, so the global economy now has a greater number of financial, economic, trade, information, policy, institutional, technology, military, travel and human links between nodes than ever before. The increasing complexity of supply chains in trade and manufacturing, ever greater outsourcing of services, rising military collaborations, the global nature of new technological advances, increasing migration and travel, as well as the rise of the internet and telecommunications traffic across the world have all greatly increased the number of connections across the nodes.
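
A quick way to see the non-linearity: in an undirected network where any pair of nodes can be linked, the number of possible links is n(n-1)/2, so it grows roughly with the square of the number of nodes. A minimal illustration (the node counts below are arbitrary):

```python
# Possible pairwise links in an undirected network of n nodes: n*(n-1)/2.
def max_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 50, 100, 200):
    print(f"{n:>4} nodes -> up to {max_links(n):>6} possible links")
```

Doubling the number of nodes roughly quadruples the number of possible connections.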

It is not just that the number of interconnections between nodes has risen almost exponentially. The scope and nature of these interlinkages has broadened significantly. The most notable broadening has come in the form of the rapid rise of complex manufacturing supply chains; financial links that result directly from the gradual dismantling of capital controls; and the rise of cross-border communication and spread of information through the internet. These ever-broadening connections between different nodes fundamentally change the behaviour of the system and how the global economy will react to any stimulus, change or shock in one or more of its nodes, in ways that become ever harder to model or predict.

Last but not least, it is not just the number and intensity of links between the nodes that has risen, but also how quickly information, technology, knowledge, shocks, finance or pathogens move between the nodes. This results, in complexity theory parlance, in an ever more tightly coupled global economy. Such systems are more efficient, and the quest for efficiency has given rise to just-in-time supply chains, the rising speed of financial trading and other developments. But this efficiency comes at the cost of rising fragility. Evidence that financial, economic, pathogenic, security and other shocks are spreading more rapidly through the world is mounting.

To sum up, the Dynamic Stochastic General Equilibrium (DSGE) models and other traditional approaches to modelling the global economy are increasingly inadequate and inaccurate in capturing the rising complexity of the global economy. This complexity is being driven both by the rising number of nodes (countries) now integrated into the global economy, as well as the number and nature of the interconnections between these, which are intensifying at an even faster pace.

This calls for a new approach to policy making that incorporates lessons from complexity theory by using a system-wide approach to modelling, changes institutional design to reduce the fragility of the system and deepens international and cross-sector policy making and policy co-ordination.

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2AY

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

Economic complexity, institutions and income inequality

by César Hidalgo and Dominik Hartmann, Macro Connections, The MIT Media Lab

Is a country’s ability to generate and distribute income determined by its productive structure? Decades ago Simon Kuznets proposed an inverted-U-shaped relationship describing the connection between a country’s average level of income and its level of income inequality. Kuznets’ curve suggested that income inequality would first rise and then fall as countries’ income moved from low to high. Yet, the curve has proven difficult to verify empirically. The inverted-U-shaped relationship fails to hold when several Latin American countries are removed from the sample, and in recent decades, the upward side of the Kuznets curve has vanished as inequality in many low-income countries has increased. Moreover, several East-Asian economies have grown from low to middle incomes while reducing income inequality.

Together, these findings undermine the empirical robustness of Kuznets’ curve, and indicate that GDP per capita is a measure of economic development that is insufficient to explain variations in income inequality. This agrees with recent work arguing that inequality depends not only on a country’s rate or stage of growth, but also on its type of growth and institutions. Hence, we should expect that more nuanced measures of economic development, such as those focused on the types of products a country exports, should provide information on the connection between economic development and inequality that transcends the limitations of aggregate output measures such as GDP.

Scholars have argued that income inequality depends on a variety of factors, from an economy’s factor endowments, geography, and institutions, to its historical trajectories, changes in technology, and returns to capital. The combination of these factors should be expressed in the mix of products that a country makes. For example, colonial economies that specialised in a narrow set of agricultural or mineral products tend to have more unequal distributions of political power, human capital, and wealth. Conversely, sophisticated products, like medical imaging devices or electronic components, are typically produced in diversified economies that require more inclusive institutions. Complex industries and complex economies thrive when workers are able to contribute their creative input to the activities of firms.

This suggests a model of heterogeneous industries in which firms survive only when they are able to adopt or discover the institutions and human capital that work best in that industry. According to this model, the composition of products that a country exports should tell us about a country’s institutions and about the quality of its human capital. This model would also suggest that a country’s mix of products should provide information that explains inequality and that might escape aggregate measures of development such as GDP, average years of schooling, or survey-based measures of formal and informal institutions.

With our colleagues from the MIT Media Lab, we used the Economic Complexity Index (ECI) to capture information about an economy’s level of development which is different from that captured in measures of income. Economic complexity is a measure of the knowledge in a society that gets translated into the products it makes. The most complex products are sophisticated chemicals and machinery, whereas the least complex products are raw materials or simple agricultural products. The economic complexity of a country depends on the complexity of the products it exports. A country is considered complex if it exports not only highly complex products but also a large number of different products. To calculate the economic complexity of a country, we measure the average ubiquity of the products it exports, then the average diversity of the countries that make those products, and so forth.
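
The “and so forth” in that description is the iterative averaging often called the method of reflections. Below is a minimal sketch of that recursion, assuming a binary country-by-product export matrix; the function name and the toy matrix are illustrative, and the published ECI uses a standardised, eigenvector-based variant of this recursion rather than a fixed number of iterations.

```python
import numpy as np

def method_of_reflections(M: np.ndarray, iterations: int = 4) -> np.ndarray:
    """Sketch of the diversity/ubiquity averaging behind the ECI.

    M is a binary country-by-product matrix: M[c, p] = 1 if country c
    exports product p (with revealed comparative advantage).
    """
    diversity = M.sum(axis=1).astype(float)   # products per country
    ubiquity = M.sum(axis=0).astype(float)    # countries per product

    k_c, k_p = diversity.copy(), ubiquity.copy()
    for _ in range(iterations):
        k_c_next = (M @ k_p) / diversity      # avg ubiquity of c's products
        k_p_next = (M.T @ k_c) / ubiquity     # avg diversity of p's exporters
        k_c, k_p = k_c_next, k_p_next

    return (k_c - k_c.mean()) / k_c.std()     # standardised country scores

# Toy example: 3 countries x 4 products (entirely made up)
M = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 1]])
print(method_of_reflections(M))  # higher score ~ more complex export basket
```

In the toy matrix, the country that exports many products, including a rarely exported one, comes out with the highest score, which is the intuition the paragraph describes.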

For example, in 2012, Chile’s average income per capita and years of schooling (USD 21 044 at PPP in current 2012 USD and 9.8 mean years of schooling) were comparable to Malaysia’s income per capita and schooling (USD 22 314 and 9.5), even though Malaysia ranked 24th in the ECI ranking while Chile ranked 72nd. The rankings reflect differences in these countries’ export structure: Chile largely exports natural resources, while Malaysia exports a diverse range of electronics and machinery. Moreover, these differences in the ECI ranking also point more accurately to differences in these countries’ level of income inequality. Chile’s inequality as measured through the Gini coefficient (0.49) is significantly higher than that of Malaysia (0.39).

We separated the correlation between economic complexity and income inequality from the correlation between income inequality and average income, population, human capital (measured by average years of schooling), export concentration, and formal institutions. Our results document a strong and robust correlation between the economic complexity index and income inequality. This relationship is robust even after controlling for measures of income, education, and institutions, and the relationship has remained strong over the last fifty years. Results also show that increases in economic complexity tend to be accompanied by decreases in income inequality.
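
“Controlling for” those variables amounts to estimating a multiple regression of inequality on the ECI alongside the other covariates, so that the ECI coefficient captures only its partial association with inequality. The sketch below uses synthetic placeholder data and a plain OLS specification; the paper’s actual data, variables and estimation strategy are richer.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150  # synthetic "countries", for illustration only

# Placeholder covariates; none of these numbers come from the paper.
eci        = rng.normal(size=n)
log_gdp_pc = rng.normal(size=n)
schooling  = rng.normal(size=n)
population = rng.normal(size=n)
gini = 0.45 - 0.05 * eci - 0.02 * log_gdp_pc + rng.normal(0, 0.03, size=n)

X = sm.add_constant(np.column_stack([eci, log_gdp_pc, schooling, population]))
result = sm.OLS(gini, X).fit()
print(result.params)  # the ECI coefficient is its partial association with inequality
```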

Our findings do not mean that productive structures solely determine a country’s level of income inequality. On the contrary, a more likely explanation is that productive structures represent a high-resolution expression of a number of factors, from institutions to education, that co-evolve with the mix of products that a country exports and with the inclusiveness of its economy. Still, because of this co-evolution, our findings emphasize that productive structures are not only associated with income and economic growth, but also with how income is distributed.

We advance methods that enable a more fine-grained perspective on the relationship between productive structures and income inequality. The method is based on introducing the Product Gini Index, or PGI, which estimates the expected level of inequality for the countries exporting a given product. Overlaying PGI values on the network of related products allows us to create maps that can be used to anticipate how changes in a country’s productive structure will affect its level of income inequality. These maps provide means for researchers and policy makers to explore and compare the complex co-evolution of productive structures, institutions and income inequality for hundreds of economies.
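
As a rough illustration of the idea, the PGI of a product can be thought of as an average of the Gini coefficients of the countries that export it, weighted by how much of the product each country exports. The sketch below uses that simple export-share weighting as an assumption; the published PGI uses a more refined weighting scheme, and the numbers are invented.

```python
import numpy as np

def product_gini_index(exports: np.ndarray, gini: np.ndarray) -> np.ndarray:
    """Rough sketch of a Product Gini Index.

    exports: countries x products matrix of export values.
    gini:    Gini coefficient of each country (same country ordering).
    Each product's PGI is taken here as the export-share-weighted average
    of the Gini coefficients of its exporters (an illustrative assumption).
    """
    weights = exports / exports.sum(axis=0, keepdims=True)  # shares per product
    return weights.T @ gini

# Toy example: 3 countries, 2 products (made-up values)
exports = np.array([[100.0,  5.0],
                    [ 80.0, 10.0],
                    [  2.0, 90.0]])
gini = np.array([0.30, 0.35, 0.55])

print(product_gini_index(exports, gini))  # one expected-inequality value per product
```

In the toy data the product exported mainly by the high-inequality country gets a high PGI, while the product exported by the low-inequality countries gets a low one.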

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2CN

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity

Crowds, consensus and complexity in economic forecasting

by Brian Dowd, FocusEconomics

Predicting the future behaviour of anything, much less something as complex and enormous as an entire economy, is not an easy task. Accurate forecasts, therefore, are often in short supply. Economies are complex systems in perpetual motion, and extrapolating behaviours and relationships from past economic cycles into the next one is tremendously complicated. Moreover, and perhaps surprisingly, forecasting is difficult due to the vast amount of raw economic data available. In an ideal world, economic forecasts would consider all of the information available. In the real world, however, that is nearly impossible, as information is scattered in myriad news articles, government communications, and so on, as well as the mountain of raw data.

Although some might consider having all of that information an advantage, nothing could be further from the truth. The thousands of indicators and data available tend to produce a vast amount of statistical noise, making the establishment of meaningful relations of causation between variables a serious challenge. And, of course, we cannot forget the inherent uncertainty in forecasting, something that forecasters must take into account and which creates even more noise to deal with.

The question then becomes, is there a way to cancel out all of that noise to get a more accurate forecast? This is where “the wisdom of the crowds” comes in. Sir Francis Galton, a Victorian polymath, was the first to note the wisdom of the crowds at a livestock fair he visited in 1906. Fairgoers were given the opportunity to guess the weight of an ox, with the best guess winning a prize. Galton hypothesised that not one person would get the answer right, but that everyone would get it right. It’s not as contradictory as it sounds. Over 750 participants made their guesses and, unsurprisingly, no one guessed the weight perfectly. However, when Galton calculated the mean of all of the guesses, incredibly, it turned out to be the exact weight of the ox: 1 198 pounds.

The basic idea of the wisdom of the crowds is that the average of the answers of a group of individuals is often more accurate than the answer of any one individual, as in Galton’s experiment. The wisdom of the crowds’ accuracy increases with the number of participants and the diversity of the expertise of each individual participant.

So what does the wisdom of the crowds have to do with economic forecasting? Remember all of that noise that makes economic forecasting so difficult and affects accuracy? The theory is that idiosyncratic noise is associated with any one individual answer, and that by taking the average of multiple answers the noise tends to cancel itself out, yielding a far more accurate picture.

Sometimes also referred to as simply combining forecasts, the consensus forecast borrows from the same idea as Galton’s wisdom of the crowds. It is essentially the average of forecasts from various sources. A great deal of empirical research over the last few decades shows that averaging multiple forecasts cancels out the statistical noise to yield a more accurate forecast. That said, it is possible for an individual forecast to beat the consensus, but it is unlikely that the same forecaster will consistently do so one forecast period after another. Moreover, those individual forecasts that do happen to beat the consensus in one period are impossible to pick out ahead of time, since they vary significantly from period to period.
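
The noise-cancelling argument can be made concrete with a small simulation: if each analyst’s forecast equals the true outcome plus independent idiosyncratic noise, the average of many forecasts will typically sit closer to the truth than a typical individual forecast. All of the numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

true_gdp_growth = 5.0   # hypothetical actual outcome, in percent
n_analysts = 25

# Each analyst = truth + idiosyncratic noise (assumed independent here).
forecasts = true_gdp_growth + rng.normal(0.0, 0.8, size=n_analysts)

consensus = forecasts.mean()
individual_errors = np.abs(forecasts - true_gdp_growth)

print(f"Consensus forecast:        {consensus:.2f}")
print(f"Consensus absolute error:  {abs(consensus - true_gdp_growth):.2f}")
print(f"Mean individual abs error: {individual_errors.mean():.2f}")
```

The averaging only helps to the extent that the analysts’ errors are genuinely independent; shared blind spots, of the “group think” kind discussed earlier in this chapter, do not cancel out.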

A practical example shows the advantages of the consensus forecast. The Consensus Forecast for Malaysia’s 2015 GDP taken in January 2015 was 5.1%. In March 2016, the actual reading came out at 5.0%. As expected, a few forecasts were closer to the end result than the Consensus, but as already mentioned, it would be impossible to know which forecasts those will be until after the fact. Another way to look at it is to compare different individual forecasts with what actually happened, as we did for 25 economic analysts’ forecasts for Malaysia’s 2015 GDP in January of 2015. By March 2016, the maximum forecast from this group turned out to be 16% above the actual reading with the minimum 10% below it. The consensus was only 1.9% above the actual reading. By taking the average of all forecasts, the upside and downside errors of the different forecasts mostly cancelled each other out. As a result, the consensus forecast was much closer to the actual reading than the majority of the individual forecasts.

Whether they are consensus forecasts or individual forecasts or any other kind of forecast, predicting the future is seldom going to be perfect. In the Malaysia example, the Consensus wasn’t spot on, but it did certainly reduce the margin of error. There is almost always going to be some error, but reducing that error is the key, and more often than not, it will result in a more accurate forecast. The consensus not only reduces the margin of error, it also provides some consistency and reliability. The forecasts from individual analysts can vary significantly from one to another, whereas the consensus will consistently provide accurate forecasts.

Forecasting is a science, but it isn’t an exact science. Forecasts may not be perfect, but they are still very important to businesses and governments, as they shed light on the future, helping them to make vital decisions on strategy, plans and budgets. So, should you trust forecasts? True, forecasting is complicated and, yes, forecasts are notoriously inaccurate and there are few ways to consistently improve forecast accuracy. The point is, however, that forecasts don’t necessarily need to be perfect to be useful. They just need to be as accurate as possible. One way to do so is to leverage the wisdom of a crowd of analysts to produce a consensus forecast.

As French mathematician, physicist and philosopher Henri Poincaré put it, “It is far better to foresee even without certainty than not to foresee at all.” The consensus forecast is a more accurate way to “foresee.”

Useful links

The original article on OECD Insights, including links and supplementary material, can be found here: http://wp.me/p2v6oD-2Mn

The full series can be found here: http://oecdinsights.org/?s=NAEC+complexity