Wednesday, January 01, 2020
Welcome
The emphasis of this blog, however, is mainly critical of neoclassical and mainstream economics. I have been alternating numerical counter-examples with less mathematical posts. In any case, I have been documenting demonstrations of errors in mainstream economics. My chief inspiration here is the Cambridge-Italian economist Piero Sraffa.
In general, this blog is abstract, and I think I steer clear of commenting on practical politics of the day.
I've also started posting recipes for my own purposes. When I just follow a recipe in a cookbook, I'll only post a reminder that I like the recipe.
Comments Policy: I'm quite lax on enforcing any comments policy. I prefer those who post as anonymous (that is, without logging in) to sign their posts at least with a pseudonym. This will make conversations easier to conduct.
Saturday, May 27, 2017
Some Main Points of the Cambridge Capital Controversy
For the purposes of this very simplified and schematic post, I present the CCC as having two sides.
- Views and achievements of Cambridge (UK) critics:
- Joan Robinson's argument for models set in historical time, not logical time.
- Mathematical results in comparing long-run positions:
- Reswitching.
- Capital reversing.
- Empirical results and applications.
- Rediscovery of the logic of the Classical theory of value and distribution.
- Arguments about the role of a given quantity of capital in disaggregated neoclassical economic theory between 1870 and 1930.
- Arguments that neoclassical models of intertemporal and temporary equilibrium do not escape the capital critique.
- A critique of Keynes' marginal efficiency of capital and other aspects of The General Theory.
- The recognition of precursors in Thorstein Veblen and earlier capital controversies in neoclassical economics.
- Views of neoclassical defenders:
- Acceptance and recognition of logical difficulties in aggregate production functions by, for example, Paul Samuelson and Frank Hahn.
- Recognition that equilibrium prices in disaggregated models are not scarcity indices; rejection of the principle of substitution.
- Edwin Burmeister's championing of David Champernowne's chain index measure of aggregate capital, useful for aggregate theory when, by happenstance, no positive real Wicksell effects exist.
- Adoption of models of intertemporal and temporary general equilibrium.
- Assertion that such General Equilibrium models are not meant to be descriptive and, besides, have their own problems of stability, uniqueness, and determinateness, with no need for Cambridge critiques.
- Samuel Hollander's argument for more continuity between classical and neoclassical economics than Sraffians see.
I think I am still ignoring large aspects of the vast literature on the CCC. This post was inspired by Noah Smith's anti-intellectualism. Barkley Rosser brings up the CCC in his response to Smith. I could list references for each point above. I am not sure I could even find a survey article that covered all those points, maybe not even a single book.
So the CCC presents, to me, a convincing counter-example to Smith's argument. In the comments to his post, Robert Waldmann brings up old, paleo-Keynesian economics as an interesting rebuttal to a specific point.
Thursday, May 25, 2017
Some Resources on Neoliberalism
Here are three:
- Anthony Giddens, in The Third Way: The Renewal of Social Democracy (1999), advocates a renewed social democracy. He contrasts what he is advocating with neoliberalism, which he summarizes as, basically, Margaret Thatcher's approach. Giddens recognizes that more flexible labor markets will not bring full employment and argues that unregulated globalism, including unregulated international financial markets, is a danger that must be addressed. He stresses the importance of environmental issues, on all levels from the personal to the international. I wish he had something to say about labor unions, which I thought had an institutionalized role in the Labour Party, before Blair and Brown's "New Labour" movement.
- Charles Peters published A Neo-Liberal's Manifesto in 1982. (See also a 1983 piece in the Washington Monthly.) This was directed to the Democratic Party in the USA. It argues that they should reject the approach of the New Deal and the Great Society. Rather, they should put greater reliance on market solutions for progressive ends. I do not think Peters was aware that the term "neoliberalism" was already taken. Contrasting and comparing other uses with Peters' could occupy much time.
- I have not got very far in reading Michel Foucault's The Birth of Biopolitics: Lectures at the Collège de France, 1978-1979. Foucault focuses on German ordoliberalism and the Chicago school of economics.
Anyways, neoliberalism is something more specific than any centrist political philosophy, between socialist central planning and reactionary ethnic nationalism. George Monbiot has some short, popular accounts. Read Noah Smith if you want confusion, incoherence, and ignorance, including ignorance of the literature.
Friday, May 19, 2017
Reversing Figure And Ground In Life-Like Cellular Automata
Figure 1: Random Patterns in Life and Flip Life
I have occasionally posted about automata. A discussion with a colleague about Stephen Wolfram's A New Kind of Science reminded me that I had started this post some time last year.
This post has nothing to do with economics, although it does illustrate emergent behavior. And I have figures that are an eye test. I am subjectively original. But I assume somebody else has done this - that I am not objectively original.
This post is an exercise in combinatorics. There are 131,328 life-like Cellular Automata (CA), up to symmetry.
2.0 Conway's Game of Life
John Conway will probably always be most famous for the Game of Life (GoL). I wish I understood monstrous moonshine.
The GoL is "played", if you can call it that, on an infinite plane divided into equally sized squares. The plane looks something like a chess board, extended forever. See the left side of Figure 1, above. Every square, at any moment in time, is in one of two states: alive or dead. Time is discrete. The rules of the game specify the state of each square at any moment in time, given the configuration at the previous instant.
The state of a square does not depend solely on its previous state. It also depends on the states of its neighbors. Two types of neighborhoods have been defined for a CA with a grid of square cells. The Von Neumann neighbors of a cell are the four cells above it, below it, and to the left and right. The Moore neighborhood (Figure 2) consists of the Von Neumann neighbors and the four cells diagonally adjacent to a given cell.
Figure 2: Moore Neighborhood of a Dead Cell
The GoL is defined for Moore neighborhoods. State transition rules can be defined in terms of two cases:
- Dead cells: By default, a dead cell stays dead. If a cell was dead at the previous moment, it becomes (re)born at the next instant if the number of live cells in its Moore neighborhood at the previous moment was x_{1} or x_{2} or ... or x_{n}.
- Alive Cells: By default, a live cell becomes dead. If a cell was alive at the previous moment, it remains alive if the number of live cells in its Moore neighborhood at the previous moment was y_{1} or y_{2} or ... or y_{m}.
The state transition rules for the GoL can be specified by the notation Bx/Sy. Let x be the concatenation of the numbers x_{1}, x_{2}, ..., x_{n}. Let y be the concatenation of y_{1}, y_{2}, ..., y_{m}. The GoL is B3/S23. In other words, if exactly three of the neighbors of a dead cell are alive, it becomes alive for the next time step. If exactly two or three of the neighbors of a live cell are alive, it remains alive at the next time step. Otherwise a dead cell remains dead, and a live cell becomes dead.
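In code, the B3/S23 transition for a single cell can be sketched in a few lines. This is a minimal illustration in Java; the class and method names are my own, not from any particular GoL implementation:

```java
// A minimal sketch of the B3/S23 transition rule for one cell.
public class LifeRule {
    // alive: the cell's current state; liveNeighbors: number of live
    // cells in its Moore neighborhood (0 through 8).
    static boolean nextState(boolean alive, int liveNeighbors) {
        if (alive) {
            // Survival rule S23: a live cell stays alive with 2 or 3 live neighbors.
            return liveNeighbors == 2 || liveNeighbors == 3;
        }
        // Birth rule B3: a dead cell becomes alive with exactly 3 live neighbors.
        return liveNeighbors == 3;
    }

    public static void main(String[] args) {
        System.out.println(nextState(false, 3)); // a birth
        System.out.println(nextState(true, 4));  // overcrowding: the cell dies
    }
}
```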
The GoL is an example of recreational mathematics. Starting with random patterns, one can predict, roughly, the distributions of certain patterns when the CA settles down, in some sense. On the other hand, the specific patterns that emerge can only be found by iterating through the GoL, step by step. And one can engineer certain patterns.
3.0 Life-Like Cellular Automata
For the purposes of this post, a life-like CA is a CA defined with:
- A two dimensional grid with square cells and discrete time
- Two states for each cell
- State transition rules specified for Moore neighborhoods
- State transition rules that can be specified by the Bx/Sy notation.
How many life-like CA are there? This is the question that this post attempts to answer.
The Moore neighborhood of a cell contains eight cells. Thus, each of the digits 0, 1, 2, 3, 4, 5, 6, 7, and 8 can appear in Bx. For each digit, one has two choices: either it appears in the birth rule or it does not. Thus, there are 2^{9} birth rules.
The same logic applies to survival rules. There are 2^{9} survival rules.
Each birth rule can be combined with any survival rule. So there are:
2^{9} × 2^{9} = 2^{18}
life-like CA. But this number is too large. I am double counting, in some sense.
4.0 Reversing Figure and Ground
Figure 1 shows, side by side, grids from the GoL and from a CA called Flip Life. Flip Life is specified as B0123478/S01234678. Figure 3 shows a window from a computer program. In the window on the left, the rules for the GoL are specified. The window on the right is used to specify Flip Life.
Figure 3: Rules for Life and Flip Life
Flip Life basically renames the states in the GoL. Cells that are called dead in the GoL are said to be alive in Flip Life. And cells that are alive in the GoL are dead in Flip Life. In counting the number of life-like CA, one should not count Flip Life separately from the GoL. In some sense, they are the same CA.
More generally, suppose Bx/Sy specifies a life-like CA, and let Bu/Sv be the life-like CA in which figure and ground are reversed.
- For each digit x_{i} in x, 8 - x_{i} is not in v, and vice versa.
- For each digit y_{j} in y, 8 - y_{j} is not in u, and vice versa.
So for any life-like CA, one can find another symmetrical CA in which dead cells become alive and vice versa.
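The two conditions above can be checked mechanically. In the Java sketch below (the names are my own invention), a rule Bx/Sy is encoded as a pair of 9-bit masks, and the figure-ground complement is computed by reflecting and complementing each mask; applying it to B3/S23 recovers Flip Life:

```java
// Sketch: compute the figure-ground complement of a life-like rule.
// A rule Bx/Sy is encoded as two 9-bit masks: bit k of the birth mask
// is set iff digit k appears in x, and similarly for survival.
public class FlipRule {
    // Birth mask of the complemented rule: k is a birth count iff
    // 8 - k is NOT a survival count of the original rule.
    static int flipBirth(int s) {
        int u = 0;
        for (int k = 0; k <= 8; k++) {
            if ((s & (1 << (8 - k))) == 0) u |= 1 << k;
        }
        return u;
    }

    // Survival mask of the complemented rule, from the original birth
    // mask: the same reflect-and-complement operation.
    static int flipSurvival(int b) {
        return flipBirth(b);
    }

    // Convenience: build a mask from a list of digits.
    static int mask(int... digits) {
        int m = 0;
        for (int d : digits) m |= 1 << d;
        return m;
    }

    public static void main(String[] args) {
        int b = mask(3), s = mask(2, 3); // Game of Life: B3/S23
        // The complement should be Flip Life: B0123478/S01234678.
        System.out.println(flipBirth(s) == mask(0, 1, 2, 3, 4, 7, 8));
        System.out.println(flipSurvival(b) == mask(0, 1, 2, 3, 4, 6, 7, 8));
    }
}
```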
5.0 Self-Symmetrical CAs
One cannot just divide 2^{18} by two to find the number of life-like CA, up to symmetry. Some rules define CA that are the same CA, when one reverses figure and ground. As an example, Figure 4 presents a screen snapshot for the CA called Day and Night, specified by the rule B3678/S34678.
Figure 4: Day and Night: An Example of a Self-Symmetrical Cellular Automaton
Given rules for births, one can figure out what the rules must be for survival for the CA to be self-symmetrical. Thus, there are as many self-symmetrical life-like CAs as there are rules for births.
6.0 Combinatorics
I bring all of the above together in this section. Table 1 shows a tabulation of the number of life-like CAs, up to symmetry.
| | Number |
|---|---|
| Birth Rules | 2^{9} |
| Survival Rules | 2^{9} |
| Life-Like Rules | 2^{9} × 2^{9} = 262,144 |
| Self-Symmetric Rules | 2^{9} |
| Non-Self-Symmetric Rules | 2^{9}(2^{9} - 1) |
| Without Symmetric Rules | 2^{8}(2^{9} - 1) |
| With Self-Symmetric Rules Added Back | 2^{8}(2^{9} + 1) = 131,328 |
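As a check on this tabulation, a brute-force enumeration over all 2^{18} rules reproduces the counts. This Java sketch stands apart from my CA program:

```java
// Sketch: count life-like rules up to figure-ground symmetry by brute force.
public class RuleCount {
    // Reflect-and-complement a 9-bit mask: bit k of the result is set
    // iff bit 8 - k of the input is clear. This is an involution.
    static int flip(int m) {
        int r = 0;
        for (int k = 0; k <= 8; k++) {
            if ((m & (1 << (8 - k))) == 0) r |= 1 << k;
        }
        return r;
    }

    // A rule Bb/Ss equals its own complement iff b == flip(s)
    // (equivalently, s == flip(b), since flip is an involution).
    static int selfSymmetricCount() {
        int count = 0;
        for (int b = 0; b < 512; b++)
            for (int s = 0; s < 512; s++)
                if (flip(s) == b) count++;
        return count;
    }

    static int countUpToSymmetry() {
        int total = 512 * 512;           // 2^18 rules in all
        int self = selfSymmetricCount(); // these count once, not twice
        return (total - self) / 2 + self;
    }

    public static void main(String[] args) {
        System.out.println(selfSymmetricCount()); // 512
        System.out.println(countUpToSymmetry());  // 131328
    }
}
```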
7.0 Conclusion
How many of these 131,328 life-like CA are interesting? Answering this question requires some definition of what makes a CA interesting. It also requires some means of determining if some CA is in the set so defined. Some CAs are clearly not interesting. For example, consider a CA in which all cells eventually die off, leaving an empty grid. Or consider a CA in which, starting with a random grid, the grid remains random for all time, with no defined patterns ever forming. Somewhat more interesting would be a CA in which patterns grow like a crystal, repeating and duplicating. But perhaps an interesting definition of an interesting CA would be one that can simulate a Turing machine and thus may compute any computable function. The GoL happens to be Turing complete.
Acknowledgements: I started with version 1.5 of Edwin Martin's implementation, in Java, of John Conway's Game of Life. I have modified this implementation in several ways.
References
- David Eppstein (2010). Growth and Decay in Life-Like Cellular Automata
Saturday, May 13, 2017
Innovation and Input-Output Matrices
Figure 1: National Income and Product Accounts
This post contains some speculation about technical progress.
2.0 Non-Random Innovations and Almost Straight Wage Curves
The theory of the production of commodities by means of commodities imposes one restriction on wage-rate of profits curves: they should be downward-sloping. They can be of any convexity. They are high-order polynomials, where the order depends on the number of produced commodities. So no reason exists why they should not change convexity many times in the first quadrant, where the rate of profits is positive and below the maximum rate of profits. The theory of the choice of technique suggests that, if multiple processes are available for producing many commodities, many techniques will contribute to part of the wage-rate of profits frontier.
The empirical research does not show this. When I looked at all countries or regions in the world, I found very little visual deviation from straight lines for most wage curves, for the ruling technique^{1}. The exceptions tended to be undeveloped countries. Han and Schefold, in their empirical search for capital-theoretic paradoxes in OECD countries, also found mostly straight curves. And only a few techniques appeared on the frontier.
I have a qualitative explanation of this discrepancy between expectations from theory and empirical results. The theory I draw on above takes technology as given. It is as if economies are analyzed based on an instantaneous snapshot. But technology evolves as a dynamic process. The flows among industries and final demands have been built up over decades, if not centuries.
In advanced economies, technology does not change randomly. Large corporations have Research and Development departments, universities form extensive networks, and the government sponsors efforts to advance Technology Readiness Levels^{2}. Sponsored research is not directed randomly. Technical feasibility is an issue, albeit one that changes over time. Another concern is what is costly at the moment, with cost being defined widely. I suggest that a constant effort to lower reliance on high-cost inputs in production processes results, over time, in coefficients of production being lowered such that wage curves become more straight^{3}.
The above story suggests that one should develop some mathematical theorems. I am aware of two areas of research in Sraffian economics that seem promising for further inquiry along these lines. First, consider Luigi Pasinetti's structural economic dynamics. I have an analysis of hardware and software costs in computer systems, which might be suggestive. Second, Bertram Schefold has been analyzing the relationship between the shape of wage curves; random matrices; and eigenvalues, including eigenvalues other than the Perron-Frobenius root.
3.0 Innovations Dividing Columns in Input-Output Table, Not Adjoining Completely New Ones
I have been moping during my day job about how I cannot keep up with some of my fellow software developers. I return to, say, Java programming after a few years, and there is a whole new set of tools. And yet, much of what I have learned did not even exist when I received either of my college degrees. For example, creating an Android app in Android Studio or IntelliJ involves, minimally, XML, Java, and Virtual Machines for testing. Back in the 1980s, I saw some presentations from Marvin Zelkowitz for what might be described as an Integrated Development Environment (IDE). He had an editor that understood Pascal syntax, suggested statement completions, and, if I recall correctly, could be used to set breakpoints and examine states for executing code. I do not know how this work fed, for example, Eclipse.
Nowadays, you can specialize in developing web apps^{4}. Some of my co-workers are Certified Information Systems Security Professionals (CISSPs). They know a lot of concepts that are sort of orthogonal to programming^{5}. I also know people that work at Security Operations Centers (SOCs)^{6}. And there are many other software specialities.
In short, software should no longer be considered a single industry. Glancing quickly at the web site for the Bureau of Economic Analysis, I note the following industries in the 2007 benchmark input-output tables:
- Software publishers (511200)
- Data processing, hosting, and related services (518200)
- Internet publishing and broadcasting and Web search portals (518200)
- Custom computer programming services (541511)
- Computer systems design services (541512)
- Other computer related services, including facilities management (54151A)
Coders, programmers, and software engineers definitely provide labor inputs in many other industries. Cybersecurity does not even appear above.
What would input-output tables have looked like, for software, in the 1970s? I speculate you might find industries for the manufacture of computers, telecommunication equipment, and satellites & space vehicles. And data processing would probably be an industry.
I am thinking that new industries come about, in modern economies, more by division and greater articulation of existing industries, not by suddenly creating completely new products. And this can be seen in divisions and movements in industries in National Income and Product Accounts (NIPA). One might explore innovation over the last half-century or so by looking at the evolution of industry taxonomies in the NIPA^{7}.
4.0 Conclusion
This post suggests some research directions^{8}. At this point, I do not intend to pursue either.
Footnotes
1. Reviewers, several years ago, had three major objections to this paper. One was that I had to offer some suggestion of why wage curves should be so straight. The other two were that I needed to offer a more comprehensive explanation of how to map from the raw data to the input-output tables I used and that I had to account for fixed capital and depreciation.
2. John Kenneth Galbraith's The New Industrial State is a somewhat dated analysis of these themes.
3. They also move outward.
4. The web is not old. Tools like Glassfish, Tomcat, and JBoss, and their commercial competitors are neat.
5. Such as Confidentiality, Integrity, and Availability; two-factor identification; Role-Based Access Control; taxonomies for vulnerabilities and intrusions; Public Key Infrastructure; symmetric and non-symmetric encryption; the Risk Management Framework (RMF) for Information Assurance (IA) Certification and Accreditation; and on and on.
6. A SOC differs from a Network Operations Center. Operators of a SOC have to know about host-based and network-based Intrusion Detection, Security Incident and Event Management (SIEM) systems, Situation Awareness, forensics, and so on.
7. One should be aware that part of the growth in the tracking of industries might be because computer technology has evolved. Von Neumann worried about numerical methods for calculating matrix inverses. Much bigger matrices are practical now.
8. I do not think my ideas in Section 3 are expressed well.
Saturday, May 06, 2017
Distribution of Maximum Rate of Profits in Simulation
Figure 1: Blowup of Distribution of Maximum Rate of Profits
This post extends the results from my last post. I think of the results presented here as providing information about the implementation of my simulation. I do not claim any implications about actually existing economies. I did not have any definite anticipations about what I would see. I suppose it could be of interest to regenerate these results where coefficients of production are randomly generated from some non-uniform distribution.
I continue to use a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.
Each random economy is tested for viability. Non-viable economies are discarded. Table 1 shows how many economies needed to be generated, given the number of produced commodities, to end up with a sample size of 300 viable economies. The maximum rate of profits is calculated for each viable economy. The maximum rate of profits occurs when the wage is zero, and the workers live on air. Thus, labor coefficients do not matter for the calculation of the maximum rate of profits.
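With the wage zero, the price equations reduce to p = (1 + r) p A, so 1 + r_max is the reciprocal of the Perron-Frobenius root of the Leontief input-output matrix A. The Java sketch below finds that root by power iteration; the 2x2 matrix is a made-up example, not a matrix from the simulation:

```java
// Sketch: maximum rate of profits from a Leontief input-output matrix,
// via the Perron-Frobenius root, computed by power iteration.
public class MaxRateOfProfits {
    static double perronFrobeniusRoot(double[][] a) {
        int n = a.length;
        double[] x = new double[n];
        java.util.Arrays.fill(x, 1.0); // positive start vector
        double lambda = 0.0;
        for (int iter = 0; iter < 1000; iter++) {
            // y = A x
            double[] y = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    y[i] += a[i][j] * x[j];
            // Normalize by the max norm; the norm converges to the root.
            double norm = 0.0;
            for (double v : y) norm = Math.max(norm, Math.abs(v));
            for (int i = 0; i < n; i++) x[i] = y[i] / norm;
            lambda = norm;
        }
        return lambda;
    }

    public static void main(String[] args) {
        double[][] a = {{0.2, 0.3}, {0.4, 0.1}}; // hypothetical coefficients
        double lambda = perronFrobeniusRoot(a);  // 0.5 for this matrix
        double rMax = 1.0 / lambda - 1.0;
        System.out.println(rMax); // approximately 1.0, that is, 100 percent
    }
}
```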
| Seed for Random Generator | Number of Commodities | Number of Economies |
|---|---|---|
| 368,424,234 | 2 | 610 |
| 345,657 | 3 | 6,124 |
| 4,566,843 | 4 | 826,471 |
| 547,527 | 5 | > 2^{31} - 1 |
I looked at the distribution of the maximum rate of profits, calculated as a percentage, in several ways. Figure 2 presents four histograms, superimposed on one another. Figure 1 expands the left tails of these histograms. I suppose Figure 2 is somewhat easier to make sense of than Figure 1, when you click on the image. Maybe the statistics in Tables 2 and 3 are clearer. One can see, for example, in random economies in which two commodities are produced, the mean of the maximum rate of profits is 43.9%. The minimum, in these 300 random economies, of the maximum rate of profits is about 0.03% and the maximum is 318%. If I wanted to be more thorough, I would have to review how skewness and kurtosis are calculated by default in the Java class org.apache.commons.math3.stat.descriptive.DescriptiveStatistics. The coefficient of variation is the ratio of the standard deviation to the mean. The nonparametric analogy, reported in the last row in Table 3, is the ratio of the Inter-Quartile Range to the median. Anyways, the distribution of the maximum rate of profits, in random viable economies generated by the simulation, is non-Gaussian and highly skewed, with a tail extending to the right.
Figure 2: Distribution of Maximum Rate of Profits
Number of Produced Commodities:

| | Two | Three | Four | Five |
|---|---|---|---|---|
| Sample Size | 300 | 300 | 300 | 300 |
| Mean | 43.9 | 15.7 | 8.28 | 4.95 |
| Std. Dev. | 50.2 | 19.3 | 7.53 | 5.90 |
| Skewness | 2.10 | 3.89 | 1.22 | 2.63 |
| Kurtosis | 5.14 | 22.2 | 0.882 | 9.64 |
| Coef. of Var. | 0.875 | 0.811 | 1.10 | 0.839 |
Number of Produced Commodities:

| | Two | Three | Four | Five |
|---|---|---|---|---|
| Minimum | 0.0327 | 0.113 | 0.0107 | 0.00405 |
| 1st Quartile | 9.35 | 4.51 | 2.52 | 1.17 |
| Median | 25.3 | 9.72 | 5.70 | 2.99 |
| 3rd Quartile | 57.3 | 19.9 | 11.3 | 6.27 |
| Maximum | 318 | 168 | 36.2 | 44.2 |
| IQR/Median | 1.90 | 1.58 | 1.54 | 1.70 |
With the simulation, the maximum rate of profits tends to be smaller, the more commodities are produced. I wish I could extend these results to a lot more produced commodities. National Income and Product Accounts (NIPAs), at the grossest level of aggregation, have on the order of 100 produced commodities. Even if results with the assumption of an arbitrary probability distribution for coefficients of production could be directly applied empirically, one would like confirmation that trends seen with a very small number of produced commodities continue.
Wednesday, May 03, 2017
I Just Simulated 6 Billion Random Economies
Figure 1: Probability a Random Economy Will Be Viable
I have begun working towards replicating certain simulation results reported by Stefano Zambelli.
At this point, I have implemented a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.
A Monte Carlo simulation, in the results reported here, tests each random economy for viability, where the technique, for each economy, is used to produce a specified number of commodities. A viable economy can reproduce the inputs used up in producing the outputs. If the economy is just viable, nothing is left over to pay the workers and the capitalists. The Hawkins-Simon condition can be used to check for viability.
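The generation and viability test can be sketched as follows. This Java fragment is not my actual simulation code: the names are my own, it is scaled down to a million runs for two commodities, and it checks the Hawkins-Simon condition through the pivots of Gaussian elimination on I - A (all leading principal minors are positive if and only if all pivots are positive).

```java
// Sketch of the Monte Carlo viability test: generate random Leontief
// matrices and check the Hawkins-Simon condition on I - A.
import java.util.Random;

public class ViabilityCheck {
    // Hawkins-Simon: the economy is viable iff every leading principal
    // minor of (I - A) is positive. Elimination without row swaps yields
    // pivots whose running products are exactly those minors.
    static boolean isViable(double[][] a) {
        int n = a.length;
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = (i == j ? 1.0 : 0.0) - a[i][j];
        for (int k = 0; k < n; k++) {
            if (m[k][k] <= 0.0) return false; // a non-positive minor
            for (int i = k + 1; i < n; i++) {
                double f = m[i][k] / m[k][k];
                for (int j = k; j < n; j++) m[i][j] -= f * m[k][j];
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Random rng = new Random(46576889L); // seed from the first table row
        int n = 2, viable = 0, runs = 1000000;
        for (int run = 0; run < runs; run++) {
            double[][] a = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    a[i][j] = rng.nextDouble(); // uniform on [0.0, 1.0)
            if (isViable(a)) viable++;
        }
        // Should come out near one half for two commodities.
        System.out.println((double) viable / runs);
    }
}
```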
Table 1 reports the results. The number of Monte Carlo runs, for each row, is 1,000,000,000. The seed is reported so I can replicate my results, if I want. I think I can provide a symmetry argument for why the probability for the first row should be 1/2. I reran the simulation for the last row with 2,000,000,000 runs and the same seed. I still found zero viable economies.
| Seed for Random Generator | Number of Commodities | Number of Viable Economies | Probability |
|---|---|---|---|
| 46,576,889 | 2 | 499,967,476 | 49.9967476% |
| 89,058,538 | 3 | 50,198,690 | 5.019869% |
| 7,586,338 | 4 | 372,339 | 0.0372339% |
| 784,054 | 5 | 99 | 0.0000099% |
| 568,233,269 | 6 | 0 | 0% |
Zambelli suggests randomly specifying a rescaled output, in some sense, for the technology so as to ensure viability. I have a rough conceptual understanding of this step, but I need a better understanding to reduce it to source code. I think I'll go on to further analyses before revisiting the issue of viability. The above results certainly suggest that my analyses will be limited, in the mean time, to economies that produce only two, three, or maybe four commodities.
I think that Zambelli's approach is worthwhile for pursuing the results in which he is interested. One limitation arises with applying a probability distribution to one particular description of technology. In practice, coefficients of production evolve in a non-random manner. Pasinetti's structural dynamics is a good way of exploring technical progress in the tradition of Sraffa.
References
- Stefano Zambelli (2004). The 40% neoclassical aggregate theory of production. Cambridge Journal of Economics 28(1): pp. 99-120.