The Financial Times has reported that back in 2006 several CPDOs (some fairly complex newly created instruments) were given a triple A rating because of a computer glitch in Moody's software. The rating, it turned out (as many naïve investors attracted by the 200 basis point spread over Libor later discovered), was off by several notches. Not long after, S&P revealed that it had also corrected an error in its own CPDO software. Although S&P explained that this glitch did not alter whatever computations the software was supposed to carry out, the explanation calmed nobody.
Which brings us to the central issue: what is a black box anyway?
The term, used commonly in computer science, refers to a computer program that receives a specific input (data) and generates an output (in this case a rating) without offering much insight as to how it works.
So much for transparency.
The danger with black boxes, as became obvious in the CPDO case, is that it is impossible for external users of ratings to detect errors: not knowing what the black box is supposed to calculate, one cannot judge the accuracy of its results.
Anyone who has ever written software knows how easy it is to make a mistake. It is for this reason that ratings should be based on clearly articulated methodologies and not black boxes.
To be precise: the rating agencies should explain in detail two things. First, the structure of the model they are using (the formulas and equations used to carry out the calculations); and second, the numerical values of the parameters. This way, investors (at least the sceptical ones) can write their own programs and replicate the agencies' results. This sanity check, leaving aside whether investors agree with the model and assumptions used, provides a second and much-needed line of defence when dealing with complex calculations.
If that seems like a tall order, I would argue that anyone willing to invest €5m in a CPDO should have been willing to pay €10,000 to a quantitatively able undergraduate to write such code.
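To see how little code the replication check actually requires, consider a minimal sketch. It assumes a deliberately simplified model (independent defaults, a fixed recovery rate) and illustrative parameter values; none of the figures below come from any real agency model. The point is only that, once the formulas and parameters are published, an investor can recompute the key risk number and compare it with the agency's.

```python
# A hypothetical replication check. Assumptions (not from any agency):
# independent defaults, fixed recovery, all numbers purely illustrative.
import random

def expected_loss(n_names, p_default, recovery, notional_per_name,
                  n_trials=100_000, seed=42):
    """Monte Carlo estimate of expected portfolio loss under the
    published (here: assumed) model and parameters."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # Draw each name's default independently with probability p_default.
        defaults = sum(rng.random() < p_default for _ in range(n_names))
        total += defaults * notional_per_name * (1.0 - recovery)
    return total / n_trials

# Recompute the figure independently, then compare with the agency's.
mine = expected_loss(n_names=100, p_default=0.02, recovery=0.4,
                     notional_per_name=1.0)
published = 1.20  # the number the black box reported (illustrative)
print(f"my estimate: {mine:.3f}, published: {published:.3f}")
# A large discrepancy flags either a modelling disagreement or a bug.
```

With the model fully disclosed, the analytic answer (100 × 0.02 × 0.6 = 1.2) is also available as a cross-check on the simulation itself, which is exactly the kind of second line of defence black boxes deny us.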
Black boxes have other disadvantages too: they make it difficult to identify the variables that drive a rating; they say little about the stability of such ratings; and they allow the agencies to change their methodologies without letting anyone know they have done so (they need only modify a few lines of code). In summary, black boxes leave everybody in the dark regarding the results they spit out.
Last April, I testified before the US Senate and I recommended eliminating the use of black boxes to determine ratings. My suggestion was based on a common belief among scientists and engineers: never trust a "result" that you don't know how to replicate. In other words, "don't just tell me what the result is, tell me how you did it, and I will check it myself." In short, trust but verify.
In any event, there was something ironic in the fact that it was Moody's that first ended up entangled in a black box mess, for it was always S&P that relied most heavily on black boxes. Indeed, S&P's methodology for collateralised debt obligation analysis is based on the so-called CDO Evaluator, a piece of software known in the market as The Black Box, but which S&P says is fully transparent. Moody's, by contrast, always took pride in its transparency, a trend that was recently, and tragically, reversed.
Finally, let us not wait for another rating fiasco to abandon the black-box practice for good. There are already many uncertainties when dealing with structured products (assets' default probabilities, recovery values, correlation assumptions). We do not need to add a new one, computer code surprises, to the picture. Just say No to black boxes.
The writer is managing director of RW Pressprich & Co, the fixed-income broker/dealer