Andy Blunden September 2006

In 1967-1970 I did my PhD on the mathematical tools used to represent “non-stationary stochastic processes” affecting building structures. What was involved was earthquakes and severe windstorms hitting structures – or rather, planned structures – the aim being to predict what would happen to these structures, so that they could be designed to withstand earthquakes and windstorms.

The point was that the behaviour of a structure subject to any given dynamic loading was already easily predictable, but the natural processes which generate those loadings remained intractable. My interest was not in predicting earthquakes and wind pressures, but in the mathematical instruments used to represent the statistical and dynamic properties of the rapidly changing patterns of acceleration or pressure generated during the relevant events. These events are inherently random; that is, one would never know exactly how the ground would move under a building in a yet-to-occur earthquake. Nevertheless, large numbers of accelerograms had been collected from all the major earthquakes of the preceding 30 years or so, and these captured the statistical and dynamic form of the possible processes (as did the traces recorded on wind recorders). The point was to devise mathematical expressions which could represent the dynamic and statistical properties of these processes, and which would allow one to calculate how the process is transformed by a known structure, so that a statistical expression could be found for the response of well-defined structures.

At the very same time that I was engaged in this work – 1968/69 – I had become interested in Marxism, and I was intensely interested in matters of historical and economic change, so all the parallels between what I was studying and economic theory were fully present for me. It so happened that, for some of the time, I shared a study area at University College London with a PhD student of econometrics who was filling the gaps in the records of economic history by extending the current methods of economic prediction backwards into earlier historical time. So I was blessed through this friendship with an insight into what economists took to be the state of the art in the analysis of “non-stationary stochastic processes.”

“Non-stationary” means “developing.” That is, it designates a process of change in which a quantity not only changes, and changes in a random way, but the statistical properties exhibited in its fluctuations are themselves changing. A stationary stochastic process, on the contrary, is one in which the relevant quantity is continually changing in a random way, but one segment of its history looks just like any other; there is change but not development.
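The distinction can be sketched numerically. In present-day Python with NumPy (purely for illustration; the records and parameters here are invented), compare a record whose statistics are constant with one whose variance grows over the course of the record:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Stationary: white noise -- every segment has the same statistics.
stationary = rng.normal(0.0, 1.0, n)

# Non-stationary: the same kind of noise, but with a variance that grows
# over the record, so later segments look different from earlier ones.
envelope = np.linspace(0.2, 3.0, n)
nonstationary = envelope * rng.normal(0.0, 1.0, n)

# Compare the spread of the first and last quarters of each record.
def quarter_stds(x):
    q = len(x) // 4
    return x[:q].std(), x[-q:].std()

s_early, s_late = quarter_stds(stationary)
n_early, n_late = quarter_stds(nonstationary)
print(s_early, s_late)   # roughly equal: change without development
print(n_early, n_late)   # late quarter much wider: the statistics themselves changed
```

For the stationary record the two quarters have much the same spread; for the non-stationary one, the late segment looks nothing like the early segment.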

In my area of work, i.e., civil engineering applications, the problem had been that the mathematical apparatus for modelling *stationary* processes was simple, and easily subject to analysis; the class of stationary stochastic processes is closed with respect to linear transformations, which civil engineers generally deal with in their analysis of building response to loading processes. So, engineers simply used the mathematical tools which worked for stationary processes.

However, the relevant natural processes – earthquake and windstorms – are highly *non-stationary*, being transient and exhibiting structural change in the course of their genesis. The response of a building to a transient load is quite different from the response of the same building to a continual process acting in the same way over a period of time.

So this approach, of using the mathematics of stationary stochastic processes to represent non-stationary stochastic processes, was fatally flawed for the purposes of handling natural processes affecting well-defined structures.

You can imagine my amusement then when I learnt that the received wisdom in the social sciences was that it was OK to represent economic history by means of the same mathematical entities which are valid only with respect to stationary processes. In fact, the social scientists were quite unaware that this limitation applied in principle to the relevant class of mathematical tools; at least the engineers were aware that they were using analytical models able to represent only stationary processes to represent non-stationary processes, and just hoped that it would be OK. The economists seemed to think that if these things worked for natural science, then the natural scientists must have done the maths, so it would be OK for them to just pick up the tools and use them for economic prediction.

Learning this in 1968/69, at the moment of the break-up of the Post-WW2 Bretton Woods arrangements, and the move from post-war controlled inflation and sustained boom, into stagflation and protracted crisis – just seemed marvellously amusing to me.

I was not pushing a mathematical model that worked for natural science on to social science. I was discovering that a mathematical model which was inadequate for natural science was obviously even more inadequate for social-historical application, and yet this didn’t seem to bother economists, who evidently believed that economic indicators fluctuated, but there was no fundamental change going on underneath.

In the field of earthquake analysis, the dominant British school, at Imperial College London, bypassed the analytical problems by simply collecting accelerograms of the top-100 earthquakes from all over the world in digital form; this set of 100 actual recordings could with some justice be deemed to fairly represent the worst that could happen to any building planned for any one specific time and place. No statistics or mathematics was required, as digital computers could calculate the response in each case. This is a perfectly fine and practical solution in the best tradition of British empiricism, but it has certain obvious limitations, and I took it upon myself to try to overcome the difficulties involved in an analytical representation of the set of possible earthquakes.

To get to the point, the tool used for this kind of representation is the Fourier transform. A Fourier transform represents a process by the phase and amplitude of its harmonic components. Any determinate process whatsoever, with a finite number of values in its series (e.g. 120 values at intervals of 1/10 second for 12 seconds), can be represented by its Fourier transform with the same number of harmonic components, even if the process is not oscillatory, let alone stationary, by nature. The hypothesis on which the existing methods rested was to suppose that any given realisation of the process is generated by randomly determining the amplitude and (most importantly) the phase of each of these (say 120) harmonic components. If you start with a highly *non-stationary* process (e.g. the price of oil at monthly intervals 1968-1978), and then reproduce a realisation of the same process on the basis of this assumption, what is realised is a *stationary* process, in which the relevant value fluctuates up and down indefinitely in a constant manner.
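This failure can be demonstrated directly. The sketch below (present-day Python with NumPy; the transient record is invented for illustration) keeps the Fourier amplitudes of a transient record but draws each phase independently at random, then regenerates the process from the altered transform:

```python
import numpy as np

rng = np.random.default_rng(1)

# A transient ("non-stationary") record: a burst of motion confined to
# the early part of an otherwise quiet 120-point series.
n = 120
t = np.arange(n)
burst = np.exp(-0.5 * ((t - 20) / 5.0) ** 2)
record = burst * rng.normal(0.0, 1.0, n)

# Fourier amplitudes of the record.
spectrum = np.fft.rfft(record)
amplitudes = np.abs(spectrum)

# Regenerate with each phase drawn independently at random
# (the DC and Nyquist components must stay real).
phases = rng.uniform(0.0, 2.0 * np.pi, amplitudes.size)
phases[0] = 0.0
phases[-1] = 0.0
surrogate = np.fft.irfft(amplitudes * np.exp(1j * phases), n=n)

# Fraction of the total energy lying inside the burst window.
window = slice(5, 36)
def energy_fraction(x):
    return float(np.sum(x[window] ** 2) / np.sum(x ** 2))

print(energy_fraction(record))     # close to 1: a genuine transient
print(energy_fraction(surrogate))  # energy spread over the whole record
```

The original record confines nearly all its energy to the burst; the regenerated realisation spreads that energy evenly over the whole record, i.e. it has become stationary, exactly the failure described above.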

This obvious failure in representation happens because the realisation or regeneration of the process from its Fourier transform is executed on the assumption that the phase and sign of every harmonic component can be determined *independently*. But it turned out that the *stationarity* of the process rests on the *statistical independence* of the various harmonic components. If on the contrary, the measured process is reproduced on the basis of *correlations* between the various harmonic components as determined by observation, then a non-stationary process is realised, with not only the statistical properties of the original process being ‘modelled’, but also its development in time.

So, the non-stationary or transient nature of processes which exhibit the combined action of a number of component processes, each of which has some kind of oscillatory or periodic nature, rests on the co-determination of the phase and strength of each component by the others. Overlook this mutual co-determination, and the whole nature of the process is lost. Select the phase of each component independently, and a non-stationary process is transformed into a stationary process, and the development is simply overlooked.

Fourier transforms can be applied to any process whatsoever. So far as Fourier analysis is concerned, *n* measurements are just *n* degrees of freedom; it doesn’t matter whether they are oscillatory or not. When reproduced, the process will endlessly repeat itself every *n* points in the series. Empirical analysis of complex processes to determine harmonic components is meaningless except insofar as the actual causal basis of the harmonic components is understood and captured in the analysis.
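A small sketch makes the point (present-day Python with NumPy; the series is arbitrary invented data): the *n*-component Fourier representation passes exactly through the *n* measurements, but evaluated beyond the record it simply repeats them, whether or not anything in the data was oscillatory:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
samples = rng.normal(size=n)     # any series at all, oscillatory or not

coeffs = np.fft.fft(samples)
k = np.arange(n)

def fourier_model(t):
    """Evaluate the n-component Fourier representation at an arbitrary
    (possibly out-of-range) time t, using the inverse-DFT formula."""
    return np.real(np.sum(coeffs * np.exp(2j * np.pi * k * t / n)) / n)

# It reproduces the n measurements exactly...
reconstructed = np.array([fourier_model(i) for i in range(n)])
print(np.allclose(reconstructed, samples))   # True

# ...but any "prediction" beyond the record just repeats the record.
print(np.isclose(fourier_model(3.0), fourier_model(3.0 + n)))   # True
```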

The reader will understand the scepticism with which I greeted Ernest Mandel’s work around this time, concerned with predicting future turns in the development of the capitalist crisis in terms of 50 year cycles.

Periodicity is manifested in a process because of the interaction of opposing tendencies between which there is some kind of ‘misclosure’ – a ‘disturbing’ force generates a countervailing force towards restoration of equilibrium, but there is a time-lag between the disturbance and the restorative response which always ‘overshoots’; this sets up the oscillation. The time lag between the cause and effect processes is reproduced in the periodicity of the fluctuation of the dominance of the one or the other. All ‘elastic’ processes and systems exhibit oscillation and are stabilised in this way.
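The mechanism is easy to simulate. In the sketch below (present-day Python with NumPy; the gain and lag are invented parameters), a deviation is corrected by a restoring force proportional to where the system stood some time ago: with no lag the deviation decays smoothly back to equilibrium, while with a lag the correction overshoots and an oscillation appears:

```python
import numpy as np

def simulate(lag, gain=0.1, steps=300):
    """Deviation x driven by a delayed restoring force:
    x[t+1] = x[t] - gain * x[t - lag].  The correction responds to
    where the system *was*, not where it is now."""
    x = np.ones(steps)          # history starts displaced at 1.0
    for t in range(lag, steps - 1):
        x[t + 1] = x[t] - gain * x[t - lag]
    return x

def sign_changes(x):
    s = np.sign(x[np.abs(x) > 1e-12])   # ignore values decayed to ~0
    return int(np.sum(s[1:] != s[:-1]))

no_lag = simulate(lag=0)       # immediate restoring force
lagged = simulate(lag=10)      # delayed restoring force

print(sign_changes(no_lag))    # 0 -- monotonic return to equilibrium
print(sign_changes(lagged))    # several -- the overshoot sets up a cycle
```

The period of the resulting oscillation is governed by the lag, just as the text says: the time-lag between cause and effect is reproduced in the periodicity of the fluctuation.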

For a given process to manifest the same periodicity over numerous cycles presupposes that the conditions underlying the interaction between the opposing forces are constant. By its very nature then, this kind of analysis is incapable of predicting system breakdowns, historical and transient crises and transformations. It is based on an assumption of *stationarity*.

In my field, some people tried to overcome the inherent stationarity of the Fourier series by using instead composites whose components were themselves non-stationary, such as damped oscillations. The sum of non-stationary components is itself non-stationary, and the approach has the attraction of allowing some causal rationale for the components. The problem with all these methods is that the class of processes from which the components are taken is *not closed* with respect to linear transformations. Consequently, such methods are useless for analysis. In economic theory, the idea of adding a cyclical process on top of a curve representing a moving equilibrium point has the same effect: it allows you to do some curve fitting, but the process is meaningless from the point of view of analysis.

As a result of the experiences described above I developed a firm prejudice against Kondratiev cycles, let alone “world-system” periodicities supposed to extend over thousands of years.