Tuesday, May 20, 2008

CPDOs expose ratings flaw at Moody’s (FT)

When the craze for CPDOs erupted in financial markets in late 2006, some observers quipped that these new products were like something out of a sci-fi blockbuster.

Not only was the ungainly acronym amusingly close to C3PO, one of the hapless robots from the Star Wars movies, but also, like the heroes of Star Trek, the products seemed “to boldly go” where no credit product had gone before.

In a time of ever-shrinking returns from investments in credit at the height of a raging bull market, early versions of these highly structured and complex deals promised to pay 200 basis points – that is, 2 percentage points – over Libor, the “risk-free” rate at which banks lend to each other. And that spread came with the top-notch triple A ratings that indicate an incredibly low probability that investors could lose their money.

To put this spread in context, triple A rated European prime mortgage-backed bonds at the time typically paid less than 20bps – less than a tenth of the CPDO – constant proportion debt obligation – coupon.

However, the triple A ratings that Moody’s awarded to some early deals were based on a model that contained an error in its computer coding and these ratings should have been up to four notches lower, according to internal documents seen by the Financial Times. Billions of dollars could have been affected.

The very first deals from ABN Amro in August 2006, which were rated triple A by Standard & Poor’s alone, provoked huge excitement among bankers, investors, traders in the underlying credit markets and of course the media. Moody’s followed up with its first rating of an ABN Amro CPDO in late September 2006.

By December, a range of banks had copied the deal and this new kind of product had been credited by some with adding new impetus to the rally in corporate credit – a rally which meant that a number of the follow-up products could not pay the same high returns promised by the original deals.

Plenty of people thought these products sounded too good to be true.

“Once again, the rating agencies have proved that when it comes to some structured credit products, a rating is meaningless,” Janet Tavakoli, an independent consultant, told the FT in November 2006. “All AAAs are not created equal, and this is a prime example.”

Nonetheless, some of the first deals performed very well, and investors had already unwound them early and taken profits before the credit crunch struck.

Leverage is key to deals’ profitable returns

Constant proportion debt obligations (CPDOs) were originally designed to make a highly leveraged bet on the performance of corporate debt in the US and Europe, writes Paul J Davies.

The standard early deals took exposure to the main investment-grade indices of credit derivatives – the iTraxx in Europe and the CDX in the US – which covered the 125 most actively traded companies in each region.

They earned premium payments for the protection they sold against default for the companies in the two indices.

The structure was designed to pay out a high fixed return over a 10-year lifespan. This was 2 percentage points annually over Libor, or the risk-free rate, for the very first deals, but the strength in credit markets at the end of 2006 meant that later deals could only earn enough premium to promise 1-1.5 percentage points over Libor.

The deals were also designed to profit from mark-to-market gains when the indices were refreshed every six months.

A key element of the CPDO structure was the prediction that applying high leverage in the early years would build up a pool of profits that would eventually cover all remaining payments due.

ABN predicted that for the first deal in August 2006 this “cash-in” point, when all the investors’ money would be moved into very low-risk government bonds or something similar, would be hit after about seven years on average, or three years ahead of maturity. However, one of the peculiarities of these deals was that if they lost money through defaults or a broad deterioration in credit markets, the CPDO would increase the leverage it applied to try to recoup that money.

This could give the deals the appearance of a gambler “chasing losses”, in the words of Cian Chandler, an S&P analyst, in an early comment on CPDOs and how they worked.
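To make the “cash-in” and loss-chasing mechanics described above more concrete, the following is a minimal, stylised sketch in Python. Everything in it – the assumed index premium, the 15-times leverage cap, the gearing of leverage to the shortfall and the random mark-to-market term – is an illustrative assumption, not the actual ABN Amro structure or either agency’s rating model. It only shows the shape of the rule: leverage rises as the gap to the cash-in target widens, and a deal can cash in early, cash out after deep losses, or simply run to maturity.

    import random

    def simulate_cpdo(
        notional=100.0,
        coupon=0.02,          # promised spread over Libor: 200bps, as in the first deals
        index_spread=0.004,   # assumed annual premium earned per unit of index exposure
        max_leverage=15.0,    # assumed cap on leverage
        gearing=75.0,         # assumed sensitivity of leverage to the shortfall
        years=10,
        seed=0,
    ):
        """Stylised sketch of CPDO cash-in / cash-out mechanics (illustrative assumptions only)."""
        random.seed(seed)
        nav = notional
        for year in range(1, years + 1):
            # Money still needed to cover principal plus all remaining coupon payments.
            target = notional + coupon * notional * (years - year + 1)
            shortfall = max(target - nav, 0.0)

            # Leverage rises with the shortfall -- the "chasing losses" feature -- up to a hard cap.
            leverage = min(max_leverage, gearing * shortfall / notional)

            # Premium earned on the leveraged index exposure, plus a random
            # mark-to-market move on that exposure, minus the coupon paid to investors.
            nav += leverage * nav * index_spread
            nav += leverage * nav * random.gauss(0.0, 0.01)
            nav -= coupon * notional

            if nav >= notional + coupon * notional * (years - year):
                # Cash-in: remaining liabilities are covered, so the deal de-levers
                # into government bonds or similar low-risk assets.
                return f"cashed in after year {year}"
            if nav <= 0.10 * notional:
                # Cash-out: losses are too deep for the structure to recover.
                return f"cashed out in year {year} (nav {nav:.1f})"

        return f"matured with nav {nav:.1f}"

    if __name__ == "__main__":
        for s in range(5):
            print(simulate_cpdo(seed=s))

Run with different random seeds, a toy model like this shows why the strategy could look robust when modelled and yet remain highly sensitive to the volatility assumptions fed into it – the issue at the heart of the ratings debate.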

The volatility of the values and ratings of CPDOs comes from the high level of leverage they applied to their bets on the performance of credit markets (see box for an explanation of the deals).

Analysts at CreditSights, an independent research house, wrote in November 2006: “The strategy is very simple and surprisingly robust when modelled. However, though we cannot pinpoint exactly where the flaw in the rating methodology is, there are a number of things that give us grounds for unease.”

Beyond S&P and Moody’s, no other agency ever got comfortable with a triple A rating for CPDO structures.

Fitch Ratings and DBRS, neither of which had been engaged to rate a deal, both released studies in April 2007 saying that CPDOs did not deserve triple A ratings.

“We think the first generation of CPDO transactions are over-rated,” John Schiavetta, head of global structured credit at Derivative Fitch in New York, told the FT in April 2007.

“We think the structure is inherently sound and investment grade, but just not double A or triple A.”

The fact that there were problems at one of only two agencies involved in rating such deals is significant. Some investors who would be in the target audience for such a highly rated product have investment mandates that require ratings from two different agencies.

Regarding its own ratings, S&P said: “Our model for rating CPDOs was developed independently and, like our other ratings models, was made widely available to the market.

“We continue to closely monitor the performance of these securities in light of the extreme volatility in CDS prices and may make further adjustments to our assumptions and rating opinions if we think that is appropriate.”

Also, rating agencies and their approaches to categorising and monitoring complicated structured debt of all kinds have come under intense scrutiny since the credit crunch hit.

While the credit crunch has been centred on mortgages and related products, the whole question of what a triple A rating means when applied to very different products, with very different performance characteristics, has been under the spotlight.

This question over triple A ratings applies especially to highly structured credit products. “Unlike with regular corporate bonds, structured bonds are made by their rating.

“They don’t exist until they’re rated and they can’t be sold without it ... a rating gives birth to a structured bond,” says Joshua Rosner, a professor of structured finance at Drexel University and chief executive of consultants Graham Fisher.

In the case of the CPDO, the birth was a long one. It took many months for ABN to perfect a structure and methodology that would pay a high return and receive a triple A rating.

The development of the product inevitably involved many discussions with ratings agencies, although ABN and the agencies have both consistently said that such exchanges never amounted to a negotiation over the rating the product would achieve.

The mix of high rating and high return encouraged many banks to try and repeat the feat quickly.

Lehman Brothers, Merrill Lynch and Dresdner Kleinwort launched their own versions of the CPDO within months of the first ABN deals.

JP Morgan, UBS, HSBC, Bear Stearns, Barclays, Société Générale and others followed.

The huge demand from all these banks to get their own deals to market put the ratings agencies under intense pressure.

At a conference for clients in November, Moody’s executives explained to frustrated delegates that the analysts were “overwhelmed” with the volume of work.

“We are working day and night to rate CPDOs,” said Paul Mazataud, a managing director in Moody’s European structured finance division.

By the end of 2006, agencies were inundated with proposals for myriad variations on the CPDO structure, including using a specialist asset manager instead of relying on the constantly updated indices, and basing the deals on different asset types, such as derivatives of mortgage-backed bonds.

Both S&P and Moody’s called a halt to rating new deals as they assessed the impact of different approaches to structuring CPDOs and ensured they were happy with their models.

It was during this time that the bug in Moody’s model for rating CPDOs was uncovered. Documents seen by the FT show that by February 2007 staff were discussing the issue and the impact that fixing the error would have on ratings.

It was nothing more than a mathematical typo – a small glitch in a line of computer code. The impact of the “bug” Moody’s analysts discovered was, nevertheless, significant.

When the model was re-run it became clear that the CPDOs could no longer achieve triple A ratings, according to documents seen by the FT.

The results showed that early CPDOs might lose between 1.5 and 3.5 notches on the Moody’s Metric, an internal measure – equivalent to up to four ratings notches.

Some Moody’s analysts had concerns. With so many transactions from other banks in the rating pipeline, the code could not be left as it was. The bug was corrected.

At the same time, the documents record that Moody’s staff looked at how they could amend the methodology to help the rating.

Some of the most senior managing directors in Moody’s European structured finance division were involved in meetings to discuss the updating of the methodology for rating CPDO-like transactions in February.

The staff also looked at reducing assumptions about the future volatility of the credit markets so that Moody’s model only anticipated minor moves in credit indices over the next 10 years.

This had the effect of reducing the negative impact on the ratings of correcting the code error.

When asked to explain the reasons for the changes to the methodology, Moody’s would not comment directly, but said: “It would be inconsistent with Moody’s analytical standards and company policies to change methodologies in an effort to mask errors.”

Changes in the methodology, it was suggested, could be explained by the agency’s use of daily rather than monthly historical data, which would reduce its volatility assumptions.

The agency is conducting a thorough review into the matter.
