The price is determined by the market, what we try to do is to make the market balanced. Today there is disequilibrium between supply and demand. Today we are trying to get the market to the normal equilibrium and the price will take care of itself.—Ali Al-Naimi, Saudi Arabia’s Oil Minister
The principal casualty of separating economics from politics is the theory of capital. Academic departmentalization placed it firmly in the hands of economists, leaving political scientists, sociologists and anthropologists with practically no say. The economists have chosen to emphasize material considerations and to all but ignore power, yet that choice has hardly cleared the water. In fact, despite their complete monopoly, economists have been unable to even define what capital means.
While all agree that capital is monetary wealth, figuring out what makes this wealth grow has proven much harder. ‘What a mass of confused, futile, and downright silly controversy it would have saved us’, writes Joseph Schumpeter (1954: 323), ‘if economists had had the sense to stick to those monetary and accounting meanings of the term instead of trying to “deepen” them!’
Of course, the problem lies not in the desire to deepen, but in the direction economists have gone digging. Their main goal has been to link accumulation with productivity, but with social production having grown in complexity the link has become increasingly difficult to pin down. And the difficulty persists precisely because economists insist it is exclusively theirs. According to Bliss (1975), once economists agree on the theory of capital, ‘they will shortly reach agreement on everything else’. But then how could they agree on this theory, if capital, by its very nature, involves power, which they view as lying outside their domain?
The material basis of capital
Despite their pivotal significance, the definition of capital and the meaning of accumulation remain unsettled.45 Historically, the principal contention stemmed from trying to marry two different perceptions of capital — one quantitative, the other qualitative. Originally, capital was seen as an income-generating fund, or ‘financial wealth’, and as such it had a definite quantity. It was also viewed as a stock of physical instruments, or ‘capital goods’, characterized by a particular set of qualities (Pasinetti and Scazzieri 1987). The key question concerned the connection between these two incarnations: are ‘capital goods’ productive, and if so, how does their productivity affect their overall magnitude as ‘capital’? (Hennings 1987).
Mainstream economics has been trying to show that capital goods indeed are productive and that this positive attribute is what makes capital as a fund valuable. The marriage, though, hasn't worked well, partly due to a large age difference: the concept of capital predates that of capital goods by a few thousand years, suggesting that their overlap is not that self-evident.
The older partner, capital, comes from the Latin caput, a word whose origin goes back to the Fertile Crescent in the Middle East. In both Rome and Mesopotamia capital had a similar, unambiguous economic meaning: it was a monetary magnitude. There was no relation to produced means of production. Indeed, caput meant ‘head’, which fits well with another Babylonian invention — the human ‘work day’ (Schumpeter 1954: 322–23; Bickerman 1972: 58, 63).
The younger partner, capital goods, was born millennia later, roughly together with capitalism. The growing significance of mechanized instruments captured the attention of pre-classical writers, but initially these were referred to mostly as ‘stocks’ (Barbon 1690; Hume 1752). The first to give capital a productive role were the French Physiocrats, and it was only with François Quesnay and Jacques Turgot during the latter half of the eighteenth century that the association between capitals (as monetary advances) and mechanized production started to take shape (Hennings 1987).
Since then, the material–productive bias has grown ever more dominant. Thus, Adam Smith speaks of ‘stocks accumulated into capital’ which is ‘necessary for carrying on this great improvement in the productive powers of labour’ (1776: 260–61); similarly with David Ricardo, who equates capital with ‘that part of the wealth of a country which is employed in production, and consists of food, clothing, tools, raw materials, machinery, &c. necessary to give effect to labour’ (1821: 85); Karl Marx talks about ‘constant capital’ represented ‘by the means of production, by the raw material, auxiliary material and instruments of labour’ (1909, Vol. 1: 232); John Bates Clark asserts that ‘Capital consists of instruments of production’, which ‘are always concrete and material’ and whose appearance as value is ‘an abstract quantum of productive wealth’ (1899: 116, 119); Irving Fisher takes capital as equivalent to the prevailing stock of wealth (1906: 52); Frank Knight sees capital as ‘consisting of non-human productive agencies’ (1921: 328); while Arthur Pigou conceives of capital as a heterogeneous material entity ‘capable of maintaining its quantity while altering its form’ (1935: 239). Summing it all up, Joseph Schumpeter naturally concludes that, in its essence, ‘capital consisted of goods’, and specifically of ‘produced means of production’ (1954: 632–33). No matter how you twist and turn it, fundamentally capital is a material entity.
Yet, the classical political economists did not have a complete theory of capital. Recall that these new scientists of society started to write when industrialization was still in its infancy, smokestack factories were few and far between, and the population was still largely rural. They therefore tended to treat the amalgam of capital goods as a ‘fund’ or ‘advance’ whose role, in their words, is merely to assist the ‘original’ factors of production — labour and land. Although the general view was that capital goods were valuable due to their productivity, no attempt was made to quantify their ‘amount’. The link between capital goods and capital therefore was left unspecified.
In hindsight, the principal obstacle in establishing this link was that the classicists still viewed capital goods as a secondary input, and in that sense as qualitatively different from the original primary inputs. This belief, though, proved no more than a temporary roadblock.
The production function
Taking the classical lead but without its associated inhibitions, the neoclassicists followed the Earl of Lauderdale (Maitland 1804) in making capital goods a fully independent factor of production, on par with labour and land. Their view of capital, articulated since the latter part of the nineteenth century by writers such as Philip Wicksteed, Alfred Marshall, Carl Menger and, primarily, John Bates Clark, emphasized the distinct productivity of capital goods, and by so doing elevated these goods from mere accessories to requisites.
The two assumptions
In his book, The Distribution of Wealth (1899), Clark used this newly found symmetry among the factors of production to offer an alternative theory of distribution. The theory stipulates a two-step mathematical link between income and production.
The first step asserts the existence of a ‘production function’. The level of output, Clark argued, is a function of quantifiable ‘factors of production’, each with its own distinct productive contribution. This assertion assumes that labour, land and capital are observable and measurable (so for instance, we can see that production uses 20, 10 and 15 units of each factor, respectively); it argues that the way these factors interact with one another in production is similarly straightforward (so we know exactly what factors enter the production process and how they affect the productivity of all other factors); and it posits that we can associate definite portions of the output with each of the factors (for example, labour contributes 40 per cent, land 15 and capital goods 25).
The second step uses the production function to explain the distribution of income. Clark claimed that, under conditions of perfect competition (‘without friction’, in his words), the income of the factors of production is proportionate to their contributions — or more precisely, to their marginal contributions (so that the wage rate is equal to the productive contribution of the last worker added to production, the rent is equal to the contribution of the last hectare of land, and the profit rate is equivalent to the contribution of the last unit of capital).
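Clark's two-step logic can be made concrete with a stylized numerical sketch. The Cobb–Douglas function used below is a later formalization, not Clark's own algebra, and all numbers are invented for illustration; the point is simply that, given such a function, marginal products can be computed and, under constant returns, they exactly exhaust output (Euler's theorem) — which is what makes the distribution story seem to close.

```python
# Illustrative sketch of Clark's two steps, using a Cobb-Douglas
# production function (a later formalization, not Clark's own algebra).
# All parameter values are hypothetical.

def output(L, K, A=10.0, alpha=0.6, beta=0.4):
    """Step 1: output as a function of measurable factor quantities."""
    return A * L**alpha * K**beta

def marginal_product(f, var, L, K, h=1e-6):
    """Numerical marginal product of labour ('L') or capital ('K')."""
    if var == "L":
        return (f(L + h, K) - f(L, K)) / h
    return (f(L, K + h) - f(L, K)) / h

L, K = 20.0, 15.0
Q = output(L, K)
w = marginal_product(output, "L", L, K)   # competitive wage rate
r = marginal_product(output, "K", L, K)   # competitive profit rate

# Step 2: with constant returns to scale (alpha + beta = 1),
# paying each factor its marginal product exactly exhausts output.
print(f"output {Q:.2f}; wage bill {w*L:.2f}; profits {r*K:.2f}")
print(f"wages + profits = {w*L + r*K:.2f}")
```

Note that the whole construction presupposes that `K` — the ‘quantity of capital’ — is a measurable number, the very assumption the rest of the chapter calls into question.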
Where does profit come from?
Formulated at a critical historical juncture, the new theory combined a powerful justification with a seemingly solid explanation. And there was certainly need for such a theory. The emergence of US big business during the latter part of the nineteenth century accelerated the centralization of capital, raised profit margins and heightened income inequality — much along the lines anticipated by Karl Marx — and these developments made earlier profit theories look hopelessly irrelevant.
Chief among these theories were the notions of ‘abstinence’ as argued by Nassau Senior (1872) and of ‘waiting’ as stipulated by Alfred Marshall (1920). According to these explanations, capitalists who invest their money are abstaining from current consumption and therefore have to be remunerated for the time they wait until their investment matures. By the end of the nineteenth century, though, the huge incomes of corporate magnates, such as Rockefeller, Morgan and Carnegie, enabled them to consume conspicuously regardless of how much they ploughed back as investment; and even when these magnates chose to be frugal, usually they did so in order to augment their power, not their future consumption.
Other theorists, such as Herbert Spencer (1904), William Sumner (1920; 1963) and later Ayn Rand (1966), took a more ‘biological’ path, claiming that profit was simply due to superior human traits. This version of financial Darwinism was happily underwritten by large US capitalists eager to make their blood turn blue. But the basic theoretical and moral problem remained.
Even if this ‘science of remuneration’ were all true, and even if capitalists indeed were superhumans whose waiting in abstinence had to be compensated, the magnitude of their remuneration remained unexplained. Why should the pay on their investment be 20 per cent rather than 5 or 50? What caused the return to fluctuate over time? And why did some capitalists win the jackpot while others, despite their merit and patience, ended up losing money? Clearly, there was a pressing need for a more robust ideology, and this is where Clark’s theory of marginal productivity came into the picture.
Contrary to the Marxist claim, Clark insisted that capital is not in the least parasitic: much like labour, it too receives its own marginal productivity, an income which therefore is essential for the growth process. Indeed, since income is proportionate to productive contributions, it is rather clear that capitalists, through their ownership of capital, in fact are more productive than workers. That must be so — for otherwise, why would their earnings be so much greater?46
The birth of ‘economics’
The marginal productivity theory enabled neoclassicists to finally remove their classical shackles and finish the liberal project of de-politicizing the economy. The classicists, whether radical or liberal, were interested primarily in well-being and distribution. Production was merely a means toward those higher ends. Clark helped reverse this order, making distribution a corollary of production. And indeed, since the turn of the twentieth century, attention has gradually shifted from the causes of income inequality to its ramifications, a subject economists felt they could safely delegate to sociologists and political scientists. The old ideological disputes of political economy were finally over. From now on, announced Alfred Marshall, we have a new science: the science of economics — complete with both the rigour and suffix of mathematics and physics.
In fact, Clark and his contemporaries not only de-politicized the economy, they also ‘de-classed’ it. Instead of workers, capitalists, rentiers and the state — differentiated entities whose struggles loomed large in the classical canons — the neoclassical landscape is populated by abstract individuals — ‘actors’ who can choose to be workers one day, capitalists the next, and voluntarily unemployed the day after. These individuals live not in society as such, but in a special space called the ‘economy’. Their sole preoccupation is to rationally maximize pleasure from consumption and minimize the pain of work. Indeed, society for them is an external and largely irrational sphere which constantly threatens to prevent their ‘economy’ from reaching its collective orgasm of Pareto Optimality.47
With so much going for it, the marginal productivity theory was quickly endorsed by professional economists — and, of course, by their captains of finance. The latter used and abused the theory partly for what it said, but especially for what it didn’t.
Marginal productivity theory in historical context
It turns out that Clark’s theory of distribution could say very little about the reality in which it developed — and this for a simple reason: the theory rose to prominence precisely when the conditions necessary for it to work disappeared.
Recall that, in liberal theology, everyone is equal before mother competition and father market. No single person can affect the market outcome. This conclusion (or assumption) is formalized in neoclassical manuals through the properties of demand and supply. Individual consumers are said to face a supply curve which they cannot alter (a flat schedule equal to the market price); similarly, individual producers confront a flat demand curve over which they have no control (also equal to the market price). This convenient setting makes the market demand and supply independent of one another. And this independence in turn leads to a unique equilibrium — a spontaneous yet stable ‘one dollar, one vote’ democracy.
The end of equilibrium
By the beginning of the twentieth century, however, when this vision became the canon, the assumption of independent supply and demand — as well as of the autonomy of consumers, the anonymity of sellers and the absence of government — was no longer tenable. It turned out that perfectly competitive equilibrium was no longer possible, even on paper.
There were three basic reasons for this impossibility. First, oligopoly substituted for ‘free’ competition, and that changed pretty much everything. In oligopolistic markets sellers become inter-dependent, and this interdependence — even if we pretend that consumers remain fully rational, knowledgeable and autonomous — makes the individual firm’s demand curve indeterminate. In the absence of a clear demand curve, firms don’t know how to maximize income. And without this unambiguous yardstick for action the oligopolistic market as a whole becomes clueless, lacking a clear equilibrium point on which to converge.48
Second, by the end of the nineteenth century there emerged an obvious asymmetry between the buy and sell sides. While individual consumers remained powerless to alter market conditions and therefore had to obey them, the giant corporation enjoyed far greater flexibility. For large firms, the ‘demand curve’ was no longer an external condition given by sovereign consumers, but rather a malleable social context to be influenced and shaped relentlessly as part of the firm’s broader investment and pricing strategy.49 Of course, in public big business continued to talk about the ‘market discipline’ to which it was presumably subjugated. But that was doublespeak, the use of mythical forces to conceal real power. In private, large firms saw themselves not only as ‘price makers’, but also as ‘market makers’.50
These two developments marked the end of spontaneous equilibrium. Originally, economics had two unknown variables and two independent functions with which to explain them: jointly solving the demand and supply equations yielded unique values for price and quantity. But with the introduction of power into the picture, demand became dependent on, if not subsumed by, supply, leaving economics with only one (combined) equation to explain the two unknowns. In one swoop, economics lost its ‘degree of freedom’. From now on, there could be any number of price/quantity outcomes, all perfectly consistent and none necessarily stable. And if the real world did appear stable, the reason was not the invisible hand of the market but the visible hand of power.51
The third development, which we already alluded to in previous chapters, was the rise of big government and, later, of active economic policy. This development presented another serious difficulty for mainstream economics. On the one hand, large governments have become integral if not necessary to the process of capital accumulation. On the other hand, their existence has ‘contaminated’ economics with power, annulling the invisible hand and leaving the notion of spontaneous equilibrium hanging on the thread of denial.
As noted in Chapter 4, economists have partly managed to ignore this dilemma by keeping the study of macroeconomics as separate as possible from microeconomics, nested uneasily in what Paul Samuelson called the ‘neoclassical synthesis’ and John Ruggie later labelled ‘embedded liberalism’. But that wasn’t enough. It was also necessary to ascertain that the public sector, no matter how large and active, remained subservient to the logic of laissez faire individualism and the interests of its large capitalists.
This requirement was greatly assisted by the founding of ‘public management’ — a new social science that would imitate, at least nominally, the principles of Frederick Taylor’s ‘scientific management’ (1911). We say nominally, since unlike the so-called scientific management of private enterprise, public management lacked from the beginning any clear yardstick for success.
The dilemma is simple. The administration of business, neoclassicists argue, is guided by a single goal: the maximization of profit. According to Clark’s marginal productivity theory, the more profit the corporation earns, the greater the well-being it must have generated, by definition. Public administration, though, has no comparable rule. Since public services commonly do not have a market price, and since public officials do not normally profit from the services they administer, there is no ‘natural’ way to tell how much utility is being generated. Unlike in the private sphere, here you have to decide what the utility is. But then utility is subjective, so how can public administrators ever hope to mimic the objective market?
There are two solutions to the problem. The first, used mostly to justify higher budgets, is to keep a straight face, pretend that the public sector behaves just like a free market, and subject it to the neoclassical mechanics of indifference curves, budget lines, production functions and possibilities frontiers. The absurd, albeit politically effective, outcome of this avenue is best illustrated by marginalist analyses of ‘national defence’ — the arena most removed from the neoclassical fantasy land (Hitch and McKean 1960). The second solution, ideal for periods of ‘belt tightening’, is to simply reiterate the basic maxim of liberalism: if in doubt, minimize. Since the public sector is a necessary evil, an authoritarian wasteful institution whose sole purpose is to ameliorate temporary ‘market failure’ and counteract communist ideology, the best yardstick for its success is the extent to which we can limit its intervention.52
Many of the key institutions of the welfare state were originally established and run by people schooled in and conditioned by neoclassical maxims. Although they all spoke of ‘public policy’ and ‘government intervention’, they tended to think of such activities as necessary ‘distortions’. With a few exceptions, mostly in the Nordic countries, there was never a systematic attempt to develop the public sector into a humane form of democratic planning.
And it is not as if the possibility wasn’t there. As a new discipline — and one that emerged after the chaos and misery of the Great Depression and two world wars — public management could have opened the door to new ways of thinking about and organizing society. It could have introduced truly democratic budgeting, new ways of assessing public projects, new frameworks for ecological planning, new pension schemes, a new architecture for public credit, mass housing for the benefit of inhabitants rather than contractors, effective public transportation, non-neoclassical conceptions of intergenerational transfers, and perhaps even a new, democratic theory of value.
But that never happened. Instead of transcending neoclassical dogma and the business creed, much of the post-war effort went into making sure people didn’t even think in such directions. As a rule, public-sector salaries were kept low, public processes were presented as inefficient and corrupt, the status of public activity was demeaned and public officials were commonly criticized and mocked. And indeed, once communism collapsed in the late 1980s, government officials and politicians around the world seemed all too eager to dismantle their own welfare states. The neoclassical Trojan horse has achieved its purpose. The commanding heights again are controlled by the free marketeers. It is as if the Great Depression had never happened.
The best investment I ever made
And so, although Clark’s distribution theory was out of sync with the new reality of power, paradoxically, it has proven immensely useful in both concealing and manipulating that very reality. The theory helps protect the belief in perfect competition despite the massive gravitational force created by large governments and big business. It helps hide the fact that oligopolistic interdependence nullifies the notion of spontaneous equilibrium while simultaneously enabling oligopolies to mould their consumers and impose their own outcomes. And it helps use the public sector for capitalist ends while preventing that sector from ever generating a democratic alternative to capitalism.
Given these tall achievements, it is only fitting that the most generous sponsors of this ideology were none other than the Rockefellers — a family whose members invented every possible trick in limiting competition and output, in using religious indoctrination for profitable ends, in rigging stock prices and bashing unions, in enforcing ‘free trade’ while helping friendly dictators, in confiscating oil-rich territories and in uprooting and destroying indigenous Indian populations (Colby and Dennett 1995). The clan’s founder, John D. Rockefeller, donated $45 million to establish the Baptist University of Chicago, where Clark’s production function would later become gospel. Eventually, Chicago became the bastion of neoclassical economics — and the neoclassical economists in turn helped make Rockefeller and his like invisible. According to Rockefeller’s own assessment, ‘it was the best investment I ever made’ (Collier and Horowitz 1976: 50).53
Some very unsettling questions
The difficulty with Clark’s logic, though, goes much deeper than indicated in the previous section. In fact, even if we ignore the external reality of power and assume away governments, oligopolies and all other contaminating factors, the theory still doesn’t stand.
The quantity of capital
The central problem, identified already by Wicksell at the turn of the century, is the very ‘quantity’ of capital (Wicksell 1935, Vol. 1: 149, originally published in 1901–6). According to received convention, a given capital usually is associated with different types of capital goods. This heterogeneity means that capital goods cannot be aggregated in terms of their own ‘natural’ units.54 The only way to ‘add’ a machine producing aircraft parts to one making shoes to another making biscuits is by summing their values measured in money. The money value of any capital good — that is, the amount investors are willing to pay for it — is the present value of its expected future profit (computed by discounting this profit by the prevailing rate of interest, so value = expected profit / rate of interest).55
Now, as long as our purpose is merely to measure the money value of capital, this method is hardly problematic and is indeed used regularly by investors around the world. The difficulty begins when we interpret such value as equivalent to the ‘physical’ quantity of capital.
To see the problem, suppose that the rate of interest is 5 per cent and that a given machine is expected to yield $1 million in profit year after year in perpetuity. Based on the principle of present value, the machine should have a physical quantity equivalent to $20 million (= $1 million / 0.05). But then what if expected profit were to go up to $1.2 million? The present value should rise to $24 million (= $1.2 million / 0.05) — yet that would imply that the very same machine can have more than one quantity! And since a given machine can generate many levels of profit, there is no escape from the conclusion that capital in fact is a ‘multiple’ entity with an infinite number of quantities. . . .
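The arithmetic of the paragraph above is a one-line capitalization formula. The sketch below simply replays the text's own numbers: the same machine, discounted at the same 5 per cent interest rate, acquires a different ‘quantity’ for every level of expected profit.

```python
# The present-value arithmetic from the text: value of a perpetual
# profit stream = expected profit / rate of interest.

def present_value(expected_profit, interest_rate):
    """Capitalized value of a perpetual annual profit stream."""
    return expected_profit / interest_rate

r = 0.05  # the 5 per cent rate of interest assumed in the text

for profit in (1_000_000, 1_200_000):
    value = present_value(profit, r)
    print(f"expected profit ${profit:,} -> capital 'quantity' ${value:,.0f}")
```

The two print lines reproduce the $20 million and $24 million figures in the text: nothing about the machine has changed, only the profit expectations attached to it.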
As it turns out, Clark’s productivity theory of distribution is based on a circular notion of capital: the theory seeks to explain the magnitude of profit by the marginal productivity of a given quantity of capital, but that quantity itself is a function of profit — which is what the theory is supposed to explain in the first place! Clark assumed what he wanted to prove. No wonder he couldn’t go wrong.
These are logical critiques. Another perhaps more substantive social challenge to the notion of physical capital came around the same time from Thorstein Veblen, to whom we turn in Chapter 12. Yet, for almost half a century Clark’s theory remained resilient, and it was only during the 1950s that the early criticism against it began to echo.
The first shots were fired by Joan Robinson (1953–54) and David Champernowne (1953–54), followed by the publication of Piero Sraffa’s seminal work, Production of Commodities by Means of Commodities (1960). Sraffa’s book, which was forty years in the making, had only 99 pages — but these were pages that shook the world, or at least they should have. In contrast to earlier sceptics who rejected the ‘quantity of capital’ as a circular concept, Sraffa began by assuming that such a quantity actually existed and then proceeded to show that this assumption was self-contradictory. The conclusion from this contradiction was that the ‘physical’ quantity of capital — and, indeed, its very objective existence — was a fiction, and therefore that productive contributions could not be measured without prior knowledge of prices and distribution — the two things that the theory was supposed to explain in the first place.
Sraffa’s attack centred on the alleged connection between the quantity of capital and the rate of interest. As noted, because capital goods are heterogeneous, neoclassicists have never been able to directly aggregate them into capital. But this aggregate, they’ve argued, nonetheless can be quantified, if only indirectly, by looking at the rate of interest.
The logic runs as follows: the higher the rate of interest — everything else being the same — the more expensive capital becomes relative to labour, and hence the less of it that will be employed relative to labour. According to this view, the ‘capital intensity’ of any productive process, defined as the ratio between the (indirectly observable) quantity of capital and the (directly observable) quantity of labour, should be negatively related to the rate of interest: the higher the rate of interest, the lower the intensity of capital, and vice versa. Of course, the relationship must be unique, with each ‘capital intensity’ associated with one and only one rate of interest. Otherwise, we end up with the same capital having more than one ‘intensity’.
And yet that is exactly what Sraffa found.
His famous ‘reswitching’ examples demonstrated that, contrary to neoclassical theory, ‘capital intensity’ need not have a unique, one-to-one relationship with the rate of interest. To illustrate, consider an economy with two technologies: process X, which is capital intensive, and process Y, which is labour intensive (i.e. less capital intensive). A rise in the rate of interest makes capital expensive relative to labour and, according to neoclassical theory, should cause capitalists to shift production from X to Y. However, Sraffa showed that if the rate of interest goes on rising, it is entirely possible that process Y once again will become the more costly, causing capitalists to ‘reswitch’ back to X. Indeed, since usually there are two or more ways of producing the same thing, and since these methods are almost always qualitatively different in terms of the inputs they use and the way they combine them over time, reswitching is not the exception, but the rule.56
The result is a logical contradiction, since, if we accept the rate of interest as an inverse proxy for capital intensity, X appears to be both capital intensive (at a low rate of interest) and labour intensive (at a high rate of interest). In other words, the same assortment of capital goods represents different ‘quantities’ of capital. . . .
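Reswitching is easiest to see with dated labour costs. The numbers below are the well-known illustration associated with Samuelson's 1966 reply in the controversy, not Sraffa's own figures: technique X uses 7 units of labour two periods before output, while technique Y uses 2 units three periods before plus 6 units one period before. Compounding these costs at the rate of interest shows X cheapest at low rates, Y cheapest at intermediate rates, and X cheapest again at high rates.

```python
# A standard numerical illustration of reswitching (the dated-labour
# example from Samuelson's 1966 reply, not Sraffa's own numbers).
# Costs are labour inputs compounded forward at the interest rate r.

def cost_X(r):
    """Technique X: 7 units of labour, two periods before output."""
    return 7 * (1 + r)**2

def cost_Y(r):
    """Technique Y: 2 units three periods before, 6 units one period before."""
    return 2 * (1 + r)**3 + 6 * (1 + r)

for r in (0.25, 0.75, 1.25):
    cheaper = "X" if cost_X(r) < cost_Y(r) else "Y"
    print(f"interest rate {r:.2f}: technique {cheaper} is cheaper")
```

The switch points fall at interest rates of 50 and 100 per cent: capitalists choose X, then Y, then X again as the rate of interest rises — exactly the non-monotonic pattern that breaks the one-to-one link between ‘capital intensity’ and the rate of interest.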
The consequence of Sraffa’s work was not only to leave profit in search of an explanation, but also to rob capital goods — the basis of so much theorizing — of any fixed magnitude.
The Cambridge Controversy
These writings marked the beginning of the famous ‘Cambridge Controversy’, a heated debate between Cambridge, England, where Robinson and Sraffa taught, and Cambridge, Massachusetts, the home of many neoclassical economists (the controversy is summarized in Harcourt 1969; 1972). Eventually, the neoclassicists, led by towering figures such as Nobel Laureate Paul Samuelson, conceded that there was a problem, offering to treat Clark’s neoclassical definition of capital not literally, but as a ‘parable’ (Samuelson 1962). A few years later, Charles Ferguson, another leading neoclassicist, admitted that because neoclassical theory depended on ‘the “thing” called capital’ (1969: 251), accepting that theory in light of the Cambridge Controversy was a ‘matter of faith’ (pp. xvii–xviii).57
Yet faith was hardly enough. The realization that capital does not have a fixed ‘physical’ quantity set off a logical chain reaction with devastating consequences for neoclassical theory. It began by destroying the notion of a production function which, as we noted, requires all inputs, including capital, to have measurable quantities. This destruction then nullified the neoclassical supply curve, a derivative of the production function. And with the supply curve gone, the notion of equilibrium — the intersection between supply and demand — became similarly irrelevant. The implication was nothing short of dramatic: without equilibrium, neoclassical economics fails its two basic tasks of explaining and justifying prices and quantities.
Clearly, this was no laughing matter. For neoclassical theory to continue to hold, the belief that capital is an objective–material thing, a well-defined physical quantity with its own intrinsic productivity and corresponding profitability, had to be retained at all costs. And so the rescue attempts began.
The first and most common solution has been to gloss the problem over — or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists — teachers and students — blissfully unaware of the whole debacle.
A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.
The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. These models try to describe — conceptually, that is — every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.
General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention.58 Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.59
The measure of our ignorance
Of course, ignoring problems does not solve them. The inconvenience is evident most vividly in empirical neoclassical studies, in which production functions are used to explain changes in output. The results of such studies are usually highly disappointing. Commonly, only part of the output variations — and often only a small part — is explained by the ‘observed’ variations of the inputs, leaving a sizeable ‘residual’ hanging in the air (a term commonly attributed to Solow 1957).
As we elaborate later in the book, one possible reason for this failure is that production is a holistic process and hence cannot be expressed as a function of individual inputs in the first place. Neoclassicists do not even consider this possibility. Instead, they prefer to circumvent the problem by separating inputs into two categories — those that can be observed, namely labour, land and capital, and those that cannot, lumped together as technology, or ‘total factor productivity’. This by-pass, suggested by Marshall (1920) and popularized by Galbraith (1958; 1967) and Drucker (1969), enables neoclassicists to avoid the embarrassment of a large output residual. To paraphrase Henri Poincaré, this residual is simply a ‘measure of our ignorance’ (Abramovitz 1956: 10). The problem, they argue, is not theoretical but practical. It lies not in the production function but in the fact that we do not know how to measure technology. If we knew how, and could incorporate the ‘quantity’ of technology into the production function, the residual would surely disappear.60
Unfortunately, this phlogiston-like argument is only too convenient, in that it can never be falsified, let alone verified. Theories that claim to explain reality should be tested on how well they do so — the smaller the ‘error’, the more convincing the theory.61 Here, however, the problem is not the theory but the facts, so the error does not matter. . . .62
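The growth-accounting arithmetic behind such studies can be sketched in a few lines. All the numbers and factor shares below are assumptions for illustration only; the point is simply that whatever the observed inputs fail to explain is booked, by construction, to ‘total factor productivity’:

```python
# Hypothetical growth-accounting sketch of the output 'residual'.
# All growth rates and factor shares are illustrative assumptions.

output_growth = 0.04   # observed growth of output (4 per cent)
labour_growth = 0.01   # observed growth of labour input
capital_growth = 0.03  # observed growth of 'measured' capital

labour_share = 0.7     # assumed factor shares; under constant
capital_share = 0.3    # returns to scale they sum to one

# Growth 'explained' by the observed inputs:
explained = labour_share * labour_growth + capital_share * capital_growth

# Whatever is left over is attributed to technology, a.k.a.
# 'total factor productivity' -- the measure of our ignorance:
residual = output_growth - explained

print(f"explained: {explained:.3f}, residual: {residual:.3f}")
```

With these figures the inputs account for only 1.6 of the 4 percentage points of growth, leaving 2.4 points — well over half — as an unexplained residual.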
The victory of faith
Neoclassical theory remains an edifice built on foundations of sand. The most questionable of these foundations is the notion that capital is a material entity, measurable in physical units and possessing its own intrinsic productivity. In fact, capital fulfils none of these requirements. The result is that the theory is unable to convincingly explain not only the structure of prices and production, but also the distribution of income which supposedly results from such structure.
In The Structure of Scientific Revolutions, Thomas Kuhn (1970) claimed that the accumulation of anomalies in science tends to engender a paradigmatic breakdown, opening the door to new theories and, eventually, to a new paradigm. Nothing of the sort has ever happened to neoclassical political economy. Although suffering from deep logical contradictions and serious empirical anomalies, neoclassical theory hasn’t broken down. On the contrary, it has only grown stronger. Its overall structure has remained more or less intact for more than a century — a feat unparalleled by any other science — and it has managed, with the help of massive business and government subsidies, to strangle pretty much all of its theoretical competitors.
But then, this victory shouldn’t surprise us, simply because neoclassical political economy is not a science, but a church. And like every church with forged scriptures, the neoclassical priests go on with their daily business, spreading the faith by building ‘elegant-seeming arguments in terms which they cannot define’ and searching for ‘answers to unaskable questions’ (Robinson 1970: 317).
For the deep sense of unease regarding the definition of capital, see Schumpeter (1954: 322–27) and Braudel (1977; 1985, Vol. 2: 232–49).↩
The productive origin of profit is now an all-prevalent logic. A typical example is provided by the recent debate over the resurgence of leveraged buyouts. Supporters of the trend argue that private firms are more productive than those listed on the stock market, while its critics maintain that delisting is driven by manipulators and asset strippers. ‘So who is right?’ asks Martin Wolf, chief economist of the Financial Times. ‘An obvious answer is that private equity is a growing activity in which willing sellers meet willing buyers. If it prospers, it must be profitable. If it is profitable, it should also be adding value’ (Wolf 2007). This rule can be frustrated by ‘market imperfections’, but its underlying logic is taken as self-evident.↩
Pareto Optimum, a neoclassical mantra named after the Italian thinker Vilfredo Pareto, refers to a situation in which no individual can be made better off without another individual becoming worse off. This situation is said to exist when the overall pie cannot be made any bigger — that is, when the economy works at full employment and at maximum efficiency. Of course, since no one has thus far been able to identify such an Optimum, the mantra is of little practical significance. But its ideological importance is considerable: if maximum output is already optimal, why worry about its distribution?↩
To illustrate this conclusion with a hypothetical example, consider a price change by Nokia. This price change will elicit responses from Nokia’s oligopolistic rivals, such as Motorola, Samsung and Panasonic, and these responses will in turn change the demand curve for Nokia’s own product. Since the direction and magnitude of these responses are open-ended, the eventual position of Nokia’s demand curve becomes unclear. And given that Nokia cannot foretell this eventual position, it cannot know the profit maximizing price and therefore cannot know how to act. Game theorists have managed to solve this problem a million times over — but only by imposing predetermined theoretical rules that real oligopolists such as Nokia and its rivals are perfectly free to ignore.↩
The concept of ‘consumer sovereignty’ has also been devalued by the immense increase in the complexity of production and consumption. We can perhaps fathom an independent farmer in the United States of the mid-nineteenth century assessing the marginal benefit of growing corn and cabbage instead of beets and tobacco, or of a slave hunter contemplating the marginal rate of substitution between the income from seizing an additional escapee and a week of idle leisure. But these types of computations are rather difficult to make in a world loaded with millions of different commodities and endless ‘choices’. It is no wonder that instead of the individualist ethos of the nineteenth century we now refer to consumers as ‘masses’ and to investors — even the most sophisticated — as ‘herds’.↩
These concepts were already part of common business parlance at the turn of the twentieth century. For early analyses of firms as ‘price makers’, see the works of Brown (1924) on General Motors, of Means (1935a) on administrative prices and of Hall and Hitch (1939) on business behaviour. On the broader politics of ‘market making’, see Kaplan et al. (1958). The nineteenth-century precursors of anti-market corporatism (including the young J. B. Clark) are examined in Perelman (2006). The power aspects of pricing will be examined in Chapter 12.↩
For those who care to read further, we should add that demand and supply are unlikely to be independent of each other even in the absence of power. The basic reason is that any change in the supply price of a given commodity redistributes income between buyers and sellers of that commodity. This redistribution in turn shifts the respective demand curves of those buyers and sellers. And since different buyers have different preferences, the redistribution of income works to alter the overall market demand curve. This simple logic implies that movements along the supply curve are accompanied by shifts of the demand curve — leading not to one, but to multiple equilibria.
Neoclassical economists solve this problem by making two assumptions. First, they ask us to forget about the liberal ideal of individual freedom and think of all consumers as drones, each one identical to the ‘representative consumer’ and therefore possessing the same set of preferences. Second, they ask us to further believe that these drones have a mental fix, such that the proportion of their income spent on various items is independent of their income level (a consumer spending 30 per cent on food when her annual income is $10,000 will also spend 30 per cent on food when her income is $10 million). These two assumptions — known as the Sonnenschein–Mantel–Debreu conditions — indeed imply that redistribution of consumer income leaves the market demand curve unchanged. But since these assumptions are patently impossible, they also imply that neoclassical consumer theory has practically nothing to say about any real world situation.↩
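The logic of this footnote can be sketched numerically. All figures below are illustrative assumptions: two consumers spend fixed but different shares of their income on food, and redistributing income between them shifts aggregate demand at an unchanged price — something the ‘drone’ assumption of identical shares is designed to rule out:

```python
# Sketch of why income redistribution shifts the market demand curve
# when consumers differ. All incomes, shares and prices are
# illustrative assumptions.

def food_demand(income, food_share, price):
    """Quantity of food demanded when a fixed share of income goes to food."""
    return food_share * income / price

price = 2.0

# Two consumers with different preferences (expenditure shares):
demand_before = (food_demand(60_000, 0.30, price)
                 + food_demand(40_000, 0.10, price))

# Redistribute $20,000 from the first consumer to the second.
# Total income is unchanged, yet aggregate demand at this price is not:
demand_after = (food_demand(40_000, 0.30, price)
                + food_demand(60_000, 0.10, price))

print(demand_before, demand_after)  # 11000.0 then 9000.0
```

Aggregate food demand falls from 11,000 to 9,000 units even though total income and the price are unchanged — the market demand curve has shifted. Give both consumers the same expenditure share and the shift disappears, which is precisely what the neoclassical assumptions are there to guarantee.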
The inferior status of the public sector is evident in the theory and practice of the national accounts. In these accounts, government expenditures on goods and services are (reluctantly) treated on par with private spending: both are considered additions to GDP. But the treatment of interest payments — paid usually to capitalist owners of the public debt — is different. Interest on private debt (to finance the production of cosmetics, cigarettes and fast food, say) is counted as payment for a productive service and is therefore made part of the national income. By contrast, interest on the public debt (for instance, to finance education, war and health services) is considered a mere ‘transfer’, a one-sided transaction like welfare or unemployment insurance payments. Presumably, no service is being rendered, and therefore there is no reason to treat such interest payments as part of the national income.↩
Chicago, like most other universities, was only too happy to serve the new capitalist benefactors:
In the world of learning, the janissaries of oil or lard potentates, with a proper sense of taste and fitness, sought consistently to sustain the social structure, to resist change, to combat all current notions which might thereafter “reduce society to chaos” or “confound the order of nature”. As a class, they shared with their patrons the belief that there was more to lose than to gain by drastic alterations of the existing institutions, and that it was wisest to “let well enough alone”. While ministers of the Baptist Church defended the Trusts as “sound Christian institutions” against “all these communistic attacks”, the managers of Rockefeller’s Chicago University also championed the combinations year by year. . . . [One] teacher of literature ostensibly … declared Mr Rockefeller and Mr Pullman “superior in creative genius to Shakespeare, Homer and Dante”. . . . [while] Professor Bemis, who happened to criticize the action of the railroads during the Pullman strike in 1894 was after several warnings expelled from the university for “incompetence”.
(Josephson 1934: 324)
Syracuse University, endowed by Mr John Archbold of the Standard Oil combine, similarly dismissed John Commons, a young economics instructor who revealed too strong an interest in the rising labour movement (p. 325).↩
Although labour and land are not homogeneous either, their heterogeneity is fundamentally different from that of capital goods. The so-called quality of labour can be moulded through education, whereas land can be improved through cultivation. Capital goods, in contrast, are rarely that supple, and once made they can seldom be converted for new tasks. This difference, though, doesn’t get labour off the hook. As we shall see later in our discussion of Marx, the transformation of labour also faces insurmountable aggregation problems.↩
As we have already mentioned in Chapter 1 and will elaborate further in Chapters 9 and 11, the discounting formula is more complicated, having to take into account factors such as varying profit flow, end-value and risk perceptions. These additional factors can be ignored for our purpose here.↩
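In its simplest form — the one sufficient for the purpose here — the discounting formula capitalizes a constant, perpetual earnings flow at a fixed discount rate. The sketch below is a minimal illustration with assumed numbers; varying flows, end-values and risk perceptions are ignored, as the footnote indicates:

```python
# Minimal sketch of the elementary discounting (capitalization) formula:
# the present value of a constant, perpetual earnings flow at a fixed
# discount rate. All figures are illustrative assumptions.

def capitalize(earnings_per_year, discount_rate):
    """Present value of a constant perpetual earnings stream."""
    return earnings_per_year / discount_rate

# Hypothetical example: $10 million a year, discounted at 5 per cent,
# capitalizes to $200 million.
value = capitalize(10_000_000, 0.05)
print(value)  # 200000000.0
```

The key point for the argument in the text is that capital here is a quantity of money, not of machines: the magnitude depends on expected earnings and the discount rate, not on any physical tally of capital goods.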
The qualitative differences between production techniques generate inflections that make reswitching possible. For a clear exposition of how reswitching works and why it would tend to infect neoclassical production models (save for those protected by special assumptions), see Hunt (1992: 536–48).↩
These machinations are typical of a faith in trouble. In The Birth of Europe, Robert Lopez describes similar challenges to the Christian dogma during the twilight of feudalism — along with the Church’s response:
In general, faith is still sufficiently adaptable in the thirteenth century for the great majority of Catholic thinkers to feel no more bound by its dogmas than the confirmed liberal or marxist today feels fettered by the basic principles of liberalism or marxism. . . . Conflict between faith and reason cannot always be avoided but in most cases it is successfully solved by an allegorical interpretation of the sacred writings. . . . St. Augustine … suggested that ‘if we happen across a passage in Holy Scripture which lends itself to various interpretations, we must not … bind ourselves so firmly to any of them that if one day the truth is more thoroughly investigated, our interpretation may collapse, and we with it’. The early mediaeval writers, as we have seen, seized upon allegory with the typical enthusiasm of ages poorly equipped with exact information and clear ideas. . . .
(Lopez 1967: 361–62, emphases added)
First, the production theatre becomes infinitely complex, making identification and quantification impossible. Second, without aggregation some input complementarity is inevitable, so the corresponding marginal products cannot be derived, even on paper. Third, because rationality and utility maximization alone do not guarantee downward-sloping excess demand functions, general equilibrium models need not be ‘stable’ (see for example Rizvi 1994). And fourth, the theory is inherently static and hence can say little about the dynamic gist of accumulation.↩
Aware of the inherent circularity of ‘tangible’ marginalism, the Austrian economists sought to circumvent the problem altogether by substituting time for capital goods. Following Jevons (1871), who formulated his production function with time as an input, writers such as Böhm-Bawerk (1891), Wicksell (1935) and later Hicks (1973) reinterpreted capital goods as stages of a temporal production process. Capital here is counted in units of the ‘average period of production’, itself a combination of original inputs and the time pattern of their employment. In general, it is believed that ‘roundabout’ processes (which are longer, more mechanized and indirect) are more productive, and that lengthening the average period of production therefore is tantamount to raising its ‘capital intensity’.
The Austrian theory has two main drawbacks. First, it’s politically risky. The early Austrians sought to undermine the labour theory of value — but like Balaam in the Book of Numbers ended up bolstering it. Their emphasis on original inputs — to the exclusion of tangible capital goods — is dangerously close to Marx’s, something the neoclassicists have been more than eager to avoid. Second, the theory’s focus, including its link to the time preferences of consumers, remains exclusively materialistic. It tries to establish a positive relationship between an aggregate quantity of capital on the one hand and productivity/utility on the other. Its route therefore is not that different from Clark’s, and indeed this theory too falls into the ‘reswitching’ trap (on this last point, see Howard 1980; Hunt 1992, Ch. 16).↩
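The Austrian ‘average period of production’ can be sketched as an input-weighted average of the time that original inputs remain invested before yielding final output. The stages and quantities below are illustrative assumptions only:

```python
# Sketch of the Austrian 'average period of production': the
# input-weighted average time that original inputs remain invested.
# The stages and quantities are illustrative assumptions.

# (labour units applied, years before final output) for each stage:
stages = [(10, 3), (20, 2), (30, 1)]

total_input = sum(quantity for quantity, _ in stages)
average_period = (sum(quantity * years for quantity, years in stages)
                  / total_input)

print(average_period)  # 100/60, i.e. roughly 1.67 years
```

A more ‘roundabout’ technique — one that shifts inputs toward earlier stages — raises this average and hence, on the Austrian view, the process’s ‘capital intensity’. Note that the measure remains strictly material-temporal, which is why, as the footnote observes, it runs into the same reswitching trap as Clark’s.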
The hopes and frustrations of those involved in this quest are echoed in the brief history of the ‘residual’ written by true believer Zvi Griliches (1996).↩
For a discipline that takes its cue from physics, the following words of Nobel Laureate Robert Laughlin should ring loud: ‘Deep inside every physical scientist is the belief that measurement accuracy is the only fail-safe means of distinguishing what is true from what one imagines, and even of defining what true means. . . . in physics, correct perceptions differ from mistaken ones in that they get clearer when experimental accuracy is improved’ (Laughlin 2005: 15).↩
Consider two hypothetical production functions, with physical inputs augmented by technology: (1) Q = 2N + 3L + 5K + T and (2) Q = 4N + 2L + 10K + T, where Q denotes output, N labour, L land, K capital, and T technology. Now, suppose Q is 100, N is 10, L is 5 and K is 4. The implication is that T must be 45 in function (1) and 10 in function (2). Yet, since technology cannot be measured, we will never know which function is correct, so both can safely claim scientific validity.↩
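The footnote’s arithmetic can be checked directly — the code below is a plain transcription of the example, nothing more. Both ‘production functions’ fit the same data perfectly once T is left free to absorb whatever the other inputs fail to explain:

```python
# The footnote's two hypothetical production functions, checked directly:
# (1) Q = 2N + 3L + 5K + T   and   (2) Q = 4N + 2L + 10K + T.
# With T unmeasurable, both fit the same data exactly.

Q, N, L, K = 100, 10, 5, 4

# T is whatever each function needs to make the identity hold:
T1 = Q - (2*N + 3*L + 5*K)    # function (1)
T2 = Q - (4*N + 2*L + 10*K)   # function (2)

print(T1, T2)  # 45 and 10, as stated in the footnote
assert Q == 2*N + 3*L + 5*K + T1
assert Q == 4*N + 2*L + 10*K + T2
```

Since T cannot be observed, no data can discriminate between the two functions — each ‘explains’ output perfectly, which is exactly why the residual argument can never be falsified.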