Archive for June, 2008

What is the Value of the US Housing Stock?

Tuesday, June 24th, 2008

One of the recurring problems in analyzing the U.S. housing market is that there is no central registry of houses that would allow a researcher to calculate the value of all houses and examine the mortgages associated with them, analogous to the databases available for public corporations such as Compustat. Instead, housing researchers must gather their data from different sources and the numbers frequently provide modestly different pictures depending upon how data are compiled.

For the value of homes in the U.S. there are two primary sources: the American Housing Survey from the Census Bureau and the Flow of Funds report from the Federal Reserve Board.

From the American Housing Survey (table 1A-7 on page 10), we see that the median price of the 75 million privately owned US homes was $165,300 in 2005, meaning that half the houses were worth less than $165,300 and half more (some very much more). Approximately 22 million houses were worth less than $100,000, 22 million between $100,000 and $200,000, 12 million between $200,000 and $300,000 and 19 million more than $300,000.

The Federal Reserve Board Flow of Funds Accounts of the US (Table B100, line 4, Real Estate owned by Households) estimates that total household real estate was worth $18,700 billion in 2005 (and $20,150 billion in the fourth quarter of 2007, the latest data); allowing for slight differences in measurement, the average home value was around $250,000 in 2005, rising to around $270,000 in 2007.

For comparison, US GDP (see table 9 of the latest GDP report here) was $12,430 billion in 2005 (and $14,070 billion in the fourth quarter of 2007), meaning that the value of private homes is around 1.4 to 1.5 times the annual GDP of the country. I am never quite sure why this particular comparison fascinates so many market analysts, because it is very difficult for me to understand how to compare the value of a long-lived asset (like a house) to the amount of income generated in a country in a year.
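The back-of-the-envelope arithmetic above can be checked in a few lines of Python; the dollar figures are the ones cited in the text, and using the 75 million owner-occupied home count is my simplifying assumption:

```python
# Rough arithmetic from the Flow of Funds and GDP figures cited above.
# Dollar amounts are in billions except the per-home averages.

homes_millions = 75   # owner-occupied homes, AHS 2005
value_2005 = 18_700   # household real estate, 2005 ($bn)
value_2007 = 20_150   # household real estate, Q4 2007 ($bn)
gdp_2005 = 12_430     # US GDP, 2005 ($bn)
gdp_2007 = 14_070     # US GDP, Q4 2007 ($bn)

avg_2005 = value_2005 * 1e9 / (homes_millions * 1e6)
avg_2007 = value_2007 * 1e9 / (homes_millions * 1e6)
print(f"average home value 2005: ${avg_2005:,.0f}")        # about $249,000
print(f"average home value 2007: ${avg_2007:,.0f}")        # about $269,000
print(f"housing / GDP 2005: {value_2005 / gdp_2005:.2f}")  # about 1.50
print(f"housing / GDP 2007: {value_2007 / gdp_2007:.2f}")  # about 1.43
```

The ratios reproduce the 1.4 to 1.5 range quoted in the text.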

How Many Houses are there in the US?

Tuesday, June 24th, 2008

The US, a country of around 301 million people, has around 125 million houses [all housing data come from the American Housing Survey of the United States, published by the US Department of Housing and Urban Development every two years, most recently in August 2006, available here].

Of these, around 109 million houses are occupied year-round, 75 million by owners and 34 million by renters. Half of the houses were built before 1973; 32 million are in central cities and 52 million in suburbs. 76 million are single family and 6 million condominiums. 85% of US homes have air conditioning, 48% have a separate dining room and 34% have a working fireplace. Approximately 2 million houses were built in 2005 and 2006 and about 1.5 million in 2007 (see here for data on housing permits, starts and completions).

Government Policy Toward Homeownership

Tuesday, June 24th, 2008

Since the 1920s the US government has encouraged homeownership over renting. Franklin D. Roosevelt said that “a nation of homeowners is unconquerable”. There is a large literature [see article here] arguing that homeownership is associated with substantial economic and social benefits, including more education, less crime and better health. As a result, a great deal of policy energy has been dedicated to expanding homeownership, from less than 50% of households in the 1940s to almost 70% in recent years.

The nature of government support for homeowners takes many forms. A major financial incentive is the tax deductibility of interest payments for most mortgages. The traditional US mortgage is a 30-year fixed-rate loan with constant payments. The first few years of payments on such a mortgage are largely interest and only a small amount of principal; since mortgage interest is tax deductible, the typical homeowner (whose income is largely salary from which taxes are withheld) will receive a significant refund when he files his taxes the next year. While there has been much discussion of revising the US tax code in recent years, very few politicians are willing to challenge the mortgage interest deduction.
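To see why early payments are mostly interest, here is a minimal amortization sketch; the $200,000 loan amount and 6.5% rate are invented for illustration:

```python
# Sketch of a standard 30-year fixed-rate mortgage amortization,
# illustrating why the first year's payments are mostly interest.
# The loan amount and rate below are invented for illustration.

def first_year_split(principal, annual_rate, years=30):
    """Return (interest paid, principal repaid) over the first 12 payments."""
    r = annual_rate / 12                           # monthly interest rate
    n = years * 12                                 # total number of payments
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    interest = principal_paid = 0.0
    balance = principal
    for _ in range(12):
        i = balance * r                  # this month's interest
        interest += i
        principal_paid += payment - i    # remainder reduces the balance
        balance -= payment - i
    return interest, principal_paid

# On a $200,000 loan at 6.5%, roughly $12,900 of the ~$15,200 paid in
# year one is interest; only about $2,200 of principal is repaid.
interest, princ = first_year_split(200_000, 0.065)
print(round(interest), round(princ))
```

Since all of that interest is deductible, the first-year tax benefit is substantial relative to the principal actually repaid.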

Beyond the tax deduction, the government provides strong support for the mortgage market.

  • Fannie Mae (FNMA) and Freddie Mac (FHLMC) are government-sponsored enterprises that somewhat reduce the cost of mortgages to homeowners. They are private corporations (owned by shareholders) that give companies making loans to home buyers a way to resell those loans to institutional investors. Their government sponsorship is interpreted by some as implying that the government might help them if they got into trouble, but this is unclear. Ginnie Mae (GNMA), by contrast, is a wholly government-owned corporation whose guarantees carry the full faith and credit of the US government.
  • The Federal Housing Administration (FHA) is a government agency that, among other things, is the largest insurer of mortgages in the world.
  • The Federal Home Loan Banks (FHLB) form a system of 12 regional banks (government-sponsored enterprises owned by their member banks), similar in structure to the Federal Reserve System, that provide liquidity to mortgage lenders.

While the government does not spend a large part of its budget on housing (a number of the programs noted above typically earn a profit or break even, although their profitability is partly due to their use of the government’s ability to borrow money cheaply), substantial resources are devoted to increasing the rate of homeownership (and more recently, to helping some of those who took out loans that they cannot afford to pay back).

One lesson of the recent housing crisis is the asymmetric nature of the risks taken by intermediaries in the mortgage market. Private institutions (such as banks) that provided services similar to those of the agencies above made modest profits when homeowners were paying their loans but faced huge losses once homeowners stopped paying. The cost to taxpayers for the implied government backing of the programs listed above is virtually zero in good times, but if there is ever a serious housing crisis in the US it could end up being quite large (note that the current housing crisis is fairly modest by the standards of the Great Depression of the 1930s).

Understanding the US Housing Market and the Credit Crisis

Tuesday, June 24th, 2008

These essays are designed to help you understand the US real estate and credit crisis. Virtually every other day there is a report that markets are down due to credit worries about the US real estate crisis [see, among many, this article, which blames high oil prices in part on the credit crisis caused by subprime debt]. While it is difficult to profit directly from this crisis (although a hedge fund that aggressively shorted subprime debt did earn 1000% in 2007 [see this article]), by understanding the structure of the housing market and the nature of the problems that have occurred you can make better investment decisions. You will learn the difference between what we believe are the two aspects of the housing crisis: the boom in home construction (too many houses were built between 2001 and 2005) and the credit crisis (between 2004 and 2006, too many loans were made to borrowers who cannot pay back their mortgages).

The essays in the U.S. housing market section of this site should provide a good overview of the situation. The first set provides a picture of the US housing market. The government has a larger role in housing than most other markets in the US (except perhaps agriculture), and the nature of that role as well as its consequences are analyzed. To understand the size of the subprime crisis you must first understand the size of the housing market relative to the population, the value of those houses and how they are financed. The US mortgage market has its own peculiarities, so there is a discussion of the types of financing that are used and the importance of the credit rating (and a precise definition of the word subprime); finally there are discussions of the types of credit available to borrowers who have poor credit ratings and a description of what happens when a borrower stops paying his mortgage.

The second set of topics provides a brief overview of the crisis, starting with the housing boom in 2001 through the credit bubble and the recent credit worries. There is a description of securitization, the process by which an individual mortgage is bundled with others, packaged and sold to institutional investors. The problems that developed in this market within the last year are then discussed, and finally there is a discussion of the actual impact of the mortgage crisis in interbank markets, and a way for you to see how markets are functioning.

Understanding US Inflation Data: CPI

Tuesday, June 24th, 2008

[If you would like a brief review of ways to measure inflation and the most important U.S. monthly inflation indices you should read Understanding Inflation Data: An Introduction first and then return to this article that is about the CPI, the consumer price index]

The CPI, or Consumer Price Index, is published by the Bureau of Labor Statistics (the BLS), a statistical agency of the U.S. Department of Labor. The BLS publishes CPI data in the middle of the month after the data were collected (data for June 2008 will be released on July 16; see here for the release schedule).

The CPI-U index (often referred to as the CPI or headline inflation number; there is much information here) measures prices for the average urban household, representing about 87% of US consumers (the index does not measure prices paid by people living on farms or in rural areas).

The BLS constructs the basket of goods consumed by urban residents by conducting interviews of around 30,000 households; the basket includes about 200 categories of goods, represented by a sample of 80,000 individual items. The weights of these items in the basket are now adjusted every two years. There is a fair amount of controversy about the categories contained in the sample (in particular concerning how the price of housing is measured; see here for the detailed explanation by the BLS. The issue is that the BLS tries to compute how much it costs to live in a house while excluding the gains and losses experienced by homeowners as investors in housing). In general, the BLS chooses specific items, such as a 4.4 pound bag of golden delicious fancy grade apples to represent the apples category, and then combines all the prices (each weighted by its share of the consumption basket) to arrive at a price index.
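The weighted-average construction can be sketched with a toy fixed-basket index; the items, prices, and weights below are invented for illustration and are not actual BLS categories or weights:

```python
# Toy fixed-basket price index: weight each item's price relative
# (new price / base price) by its share of the consumption basket.
# Items, prices, and weights are invented, not actual BLS data.

base_prices = {"apples (4.4 lb bag)": 4.00, "gasoline (gal)": 2.20, "rent (mo)": 900.0}
new_prices  = {"apples (4.4 lb bag)": 4.40, "gasoline (gal)": 3.30, "rent (mo)": 930.0}
weights     = {"apples (4.4 lb bag)": 0.05, "gasoline (gal)": 0.15, "rent (mo)": 0.80}

def price_index(prices, base, w):
    """Weighted average of price relatives, with the base period = 100."""
    return 100 * sum(w[item] * prices[item] / base[item] for item in w)

print(round(price_index(new_prices, base_prices, weights), 2))
```

With these invented numbers the index rises from 100 to about 110.7: even though gasoline rose 50%, the heavily weighted rent category dominates the overall number.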

As technology changes it becomes very difficult to measure the market basket. New products are introduced every year (how much did an iPhone cost 20 years ago? see here for a nice comparison of a 2008 model iPhone and a state of the art 1988 Motorola DynaTAC 8000X that cost $4000 when first released) and the BLS must change the basket of goods frequently to keep up with consumers.

In 1996 the Boskin Commission report found that the CPI overstated inflation by about 1.1 percentage points per year due to biases related to substitution, new goods and quality change. At the time there was general consensus that the CPI overstated inflation, but much disagreement about the magnitude. Substitution bias means that when the price of one good (say Coca Cola) rises, consumers will buy more Pepsi Cola or other similar soft drinks without being much worse off; an index that assumes that the basket does not change will overstate inflation. New goods and quality change refer to the problem of measuring the price difference between the latest iPhone and its predecessors.
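A toy example (with invented prices and quantities) shows how substitution bias arises: a fixed-basket index ignores the shift from Coke to Pepsi, and so reports higher inflation than an index priced with the post-substitution basket:

```python
# Toy illustration of substitution bias. Suppose Coke's price rises 20%
# while Pepsi's is flat, and consumers shift half their Coke purchases
# to Pepsi. All numbers are invented for illustration.

base = {"coke": 1.00, "pepsi": 1.00}   # base-period prices
new  = {"coke": 1.20, "pepsi": 1.00}   # new prices

fixed_basket   = {"coke": 10, "pepsi": 10}  # base-period quantities
shifted_basket = {"coke": 5,  "pepsi": 15}  # quantities after substitution

def cost(prices, basket):
    """Cost of buying the given basket at the given prices."""
    return sum(prices[g] * basket[g] for g in basket)

# Fixed-basket inflation: assumes consumers never substitute.
fixed = cost(new, fixed_basket) / cost(base, fixed_basket) - 1
# Inflation measured with the basket consumers actually shifted to.
substituted = cost(new, shifted_basket) / cost(base, shifted_basket) - 1

print(f"fixed basket: {fixed:.1%}")             # 10.0%
print(f"with substitution: {substituted:.1%}")  # 5.0%
```

The actual CPI methodology is far more elaborate (the BLS has since adopted formulas that partially account for substitution within categories), but the direction of the bias is the same.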

Other studies found a bit less bias but still believed that CPI was too high (see here for a review of later studies). More recently critics of the CPI such as Bill Gross (see here and here) have argued that the BLS overcorrected and now the CPI understates inflation; but there are others (see here) who believe that the CPI fairly accurately reflects the inflation picture.

It is important to remember that the CPI is an attempt at measuring the level of prices for an urban consumer and that the composition of urban consumption bundles has changed dramatically in the last 40 years. There has been an explosion in the number of new products; the number of products in supermarkets has increased from around 8,000 in 1970 (see here) to around 45,000 today (see here), and the internet allows consumers access to a much wider set of retailers. While the CPI might accurately capture the average prices paid by urban consumers, differences across consumption bundles have certainly increased. Further, consumers are especially aware of food and gasoline prices and seem less aware of service prices (how much are you paying per channel for cable/satellite TV versus a few years ago? The price has probably gone up, but so has the number of channels). Complaints about the accuracy of the CPI are likely to be greatest when (as is the case now) food and energy prices are rising rapidly. It is probably safe to conclude that complaints about the CPI’s accuracy will increase over time along with the diversity of the consumption bundles purchased.

Understanding US Inflation Data: An Introduction

Tuesday, June 24th, 2008

Measuring inflation is more complicated than it would seem at first. It is fairly easy to see the price of gasoline on the big signs at many major intersections. Average gas prices are almost double the level of a year ago (see here for US retail gasoline prices); many Americans buy gas regularly and are aware of the price rise. The prices of some other goods have also risen sharply, from vegetable oils, rice and wheat to cable television and health insurance. If all these prices are rising (as well as others like college education and airfare) then it seems hard to believe the government’s published numbers claiming that overall prices (as measured by the Consumer Price Index) were up 4.2% in the last 12 months; the claim that core prices (excluding food and energy) were up only 2.3% is likely to be treated derisively by those who believe that the things whose prices went up have simply been excluded from the inflation calculation (see here for one of the more carefully thought out expressions of this view).

Yet official inflation series present fairly low estimates of inflation, raising speculation about secret government efforts to understate the inflation rate (see here for a fairly mainstream version of this argument by a famed bond fund manager; articles such as this one from Bloomberg News use slightly more provocative language).

To better follow these arguments I will first discuss some of the problems of measuring inflation. Future topics will include a review of several popular monthly inflation indices, the CPI, PPI and the PCE deflator.

First a bit of definition: a price index tries to capture the level of prices at a specific time; the change in the level of prices is called inflation (or deflation, if the change in prices is negative). Note that inflation and deflation are merely statements about the change in an index and not any statement about other economic conditions. Because deflation has occurred during some difficult economic times (such as the 1930s in the US or more recently in Japan) some people confuse deflation with recession or depression. But deflation and inflation are merely statements about the direction of change of the price level and nothing more.

There are numerous problems with calculating price levels and inflation; I will briefly speak of three of them, though there are others. Consider first a very simple economy with one good that does not change over time; the price level in this economy is just the price of the one good. It is very easy to compare prices over time; some years the price of this good will be higher and there will be inflation, other years it will be lower and there will be deflation.

The three major problems when we try to move from the simple economy described above to the actual economy of today arise from the observation that there are many goods (and many prices), so (1) our consumption basket changes over time, (2) the goods change over time and (3) not everyone consumes the same basket.

The people who believe that inflation is overstated argue that the price of gasoline (as noted above) has doubled in the last year. But (1) many of us consume less gasoline now than we did a year ago, driving less or using more efficient cars; (2) gasoline has changed somewhat over time, with the addition or removal of certain ingredients to increase the octane rating or reduce the negative effects of gasoline on the environment and (3) different people consume different amounts of gasoline, depending on where they live, what kinds of cars they own and how much they drive. So if the gas price is up 100%, but there is a new additive that has slightly increased the octane level and I drive much less (but still much more than you) how exactly do we compute the appropriate inflation number?

Also, many of us use more computer services than we did, say, ten years ago; some of those services are far cheaper than they were (if they even existed then). High speed internet was fairly new and expensive in the late 1990s and it is now cheaper and faster; I probably spend more each year on computer-related services (including hosting this web site) than I do on gasoline.

Without getting into the mathematics (there is a reasonably clear explanation in Wikipedia if you are interested), a great deal of effort has been devoted to dealing carefully with these issues, but each method involves serious compromises.

As noted above, there are a number of different inflation measures (calculated by different government agencies) that are used to describe inflation; I will provide a brief overview of the different measures here with more detail in specific essays about each measure.

The CPI (consumer price index), calculated by the BLS (Bureau of Labor Statistics), is the best known U.S. price index. CPI data are published once a month and try to measure the prices paid by urban consumers for goods and services. Two numbers attract the most attention: the overall inflation rate and the “core” rate. The first measures the price of the entire basket; the second excludes food and energy, whose prices are typically more volatile than others, meaning that large changes tend to be reversed over time. Some analysts believe that recent price increases in food and energy are unlikely to be reversed, and so they tend to ignore the “core” rate (which is much lower than the overall inflation rate).

The PPI (Producer Price Index) is also published by the BLS. In contrast with the CPI (which tries to measure prices from the perspective of the ultimate buyer of the goods and services) the PPI measures prices from the perspective of the seller. To the degree that retailers pass through the prices that they pay for goods, changes in the PPI should be a good predictor of future changes in the CPI (if the price of fish sold by wholesalers rises, the likelihood is that stores will raise their prices to retail customers). But in recent years the link between wholesale and retail prices has been less clear, possibly due to changes in the way business is done. Financial markets appear to pay less attention to PPI than CPI (one curious note is that up until a few years ago PPI data were routinely released prior to CPI data, but now CPI data are released first; the earlier release of the PPI led some to use it as an aid in forecasting the CPI).

The PCE deflator starts with the CPI data but also incorporates other data. The Bureau of Economic Analysis (the group within the Commerce Department that calculates the GDP) calculates monthly estimates of Personal Consumption Expenditures as part of the Personal Income report. The most important difference between the two measures of inflation is that the CPI uses a fixed-weight market basket (updated only every few years) that assumes that consumers continue to buy the same goods even if prices change; in contrast the PCE uses a so-called “chain link” method that reflects the monthly changes in the basket. Also, the basket for the PCE is somewhat larger than the basket included in the CPI (see here for a brief description of the differences). The PCE data are reported after the CPI data (data for the April 2008 CPI were released on May 14; data for the PCE deflator were released on May 30); the Fed has used the PCE core index (ex-food and energy) as its measure of inflation (see footnote one in this PDF document).

Understanding US Unemployment Claims Data

Tuesday, June 24th, 2008

Every Thursday, at 8:30 AM US Eastern Time, the Department of Labor releases the data on initial claims for unemployment insurance (see here or here for recent reports). In the US most employers pay a small unemployment insurance tax; in general, if a worker has worked for more than a year and loses a job through no fault of his own, the worker can apply for unemployment benefits, but not all applications will result in benefits (see here for more). To continue to receive benefits the worker must be available for work (ready and willing to accept suitable work, and making a personal and continuing effort to find work). The benefits are related to the salary the worker earned, roughly half the weekly salary for most workers.

The great advantage of these data is their timeliness: every Thursday the data for the week ended the previous Saturday (five days earlier) are released, along with revisions to the previous week’s data. In contrast, the employment report is released around three weeks after the data are collected, and most other data considerably later.

The principal disadvantage of the data is their noisiness: there are many reasons for employers to lay off workers that are distinct from movements in the general economy. There are strong seasonal patterns to employment: for example, many retail workers are hired before Christmas and are not needed shortly thereafter, and large industrial employers often shut down factories for maintenance or to retool for a new product, temporarily laying off their workers (who can then collect benefits while their employers do not need them).

Without getting into too much detail, the BLS seasonal adjustment procedure (correcting the data for typical seasonal patterns) is a bit mysterious. The seasonal factors to adjust data for the last few years through April 2009 can be found here. In early January, the seasonal factor peaks at 173, meaning that the actual number of claims is divided by 1.73 to produce the seasonally adjusted number; in September the seasonal factor of 76 means that the actual number of claims is divided by 0.76, so that actual claims of 300,000 would be seasonally adjusted up to around 395,000. Once you factor in the complexity of four-day weeks (there are around 12 national holidays plus the occasional state holiday), you can see why one should not put too much weight on any single weekly number.
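The division described above is easy to sketch directly; the seasonal factors are the approximate values cited in the text, and the raw claims numbers are invented for illustration:

```python
# Seasonal adjustment as described above: divide raw weekly claims by
# the seasonal factor (expressed as a decimal) for that week. Factors
# are the approximate values cited in the text; raw claims are invented.

def seasonally_adjust(raw_claims, seasonal_factor):
    """A factor above 1 means claims are typically high that week."""
    return raw_claims / seasonal_factor

# Early January: factor ~1.73, so raw claims are scaled down sharply.
print(round(seasonally_adjust(500_000, 1.73)))  # about 289,000
# September: factor ~0.76, so raw claims of 300,000 are scaled up.
print(round(seasonally_adjust(300_000, 0.76)))  # about 395,000
```

The same raw number can thus produce very different headline figures depending on the week it falls in, which is one source of the noisiness discussed above.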

For this reason, some analysts prefer to look at the four-week average of the data as a way of smoothing out some of the variability of the data. There have been a number of cases in recent years when claims were up (or down) sharply for a few weeks but then returned to the range of the previous few months.
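A minimal sketch of the four-week average (with invented weekly numbers) shows how a brief spike gets smoothed:

```python
# Four-week trailing average of weekly initial claims, the smoothing
# described above. The weekly claims numbers are invented for
# illustration: a two-week spike in an otherwise ~310,000 range.

def four_week_average(claims):
    """Average of each trailing four-week window."""
    return [sum(claims[i - 3:i + 1]) / 4 for i in range(3, len(claims))]

weekly = [310_000, 365_000, 372_000, 305_000, 318_000, 312_000]
print(four_week_average(weekly))  # [338000.0, 340000.0, 326750.0]
```

The averaged series barely moves even though the raw series jumped by more than 60,000 for two weeks, which is exactly why analysts prefer it.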

In general, in recent years when the economy is doing well the average initial weekly claims number is around 300,000; when the economy is in recession the number is closer to 400,000. An amazing part of the U.S. economy (especially relative to some European economies) is the amount of “churn” in the job market. Consider a month when the job market is poor, and employment is down by 100,000 and weekly jobless claims are 400,000. Rough calculations would imply that 1.6 million people lost their jobs (4 weeks of 400,000) and 1.5 million people found jobs (resulting in net jobs decreasing by 100,000). So even in a bad month, over 1.5 million Americans are hired for a new job. The dynamism of the U.S. economy means that firms fire workers on a regular basis, but there are usually other firms that are hiring.
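The rough churn arithmetic in this paragraph is easy to reproduce (keeping in mind, as the text says, that these are rough calculations; weekly claims only approximate total separations):

```python
# The churn arithmetic above: gross separations vs gross hires in a
# month when payrolls fall by 100,000 and claims run at 400,000/week.

weekly_claims = 400_000
weeks = 4
net_job_change = -100_000

gross_separations = weekly_claims * weeks          # 1,600,000 jobs lost
gross_hires = gross_separations + net_job_change   # 1,500,000 new hires
print(gross_separations, gross_hires)              # 1600000 1500000
```

Even in a bad month, gross hiring is an order of magnitude larger than the net change in employment.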

Along with initial claims, continuing claims data are also released. A worker who loses his job files an initial claim for unemployment insurance once, but may continue to receive benefits for months before he finds another job. To continue to receive benefits a worker must file regular reports (see here under continued eligibility). In general, during good economic times unemployed workers find jobs quickly and stop receiving benefits; analysts thus view a rising continuing claims number as a signal that the economy is weak. But you must be careful, because there are limits on how long workers can receive benefits, generally around 26 weeks (six months); after this time the worker can no longer receive benefits, even if he has not found a job. However, during difficult economic times Congress will often extend this period (for example, to 39 weeks).

Understanding the US Employment Report

Tuesday, June 24th, 2008

The employment report is the single most important US economic data release. The importance of this release is due to its timeliness (the report comes out ahead of most other monthly economic data) and its scope (it is the result of two large surveys: the household survey of 60,000 households and the establishment survey of 160,000 businesses and government agencies covering 400,000 work sites). The report is viewed by some analysts as a sort of monthly GDP number, due to the strong correlation between employment and growth.

This article will not make you an expert analyst of this report (if you want to be an expert I would suggest a year or more of graduate school in economics followed by a year or more working at the economics department of a bank or investment bank as a better start). But it will tell you a few things to look for and provide you with some context to understand what it means.

First, as was noted above, there are two surveys. One of them (the establishment survey) gets most of the attention from traders. The key number (announced with the rest of the data at 8:30 am Eastern Time on the first Friday of every month, and available shortly thereafter here) is the change in nonfarm employment, the change in the number of workers as calculated by employers and adjusted by the Labor Department. There is much speculation about this number, and if it is greater than expected (and greater than the so-called “whisper” number that certain traders circulate just before the official number is released) then typically equities will rise (in general: strong economy, strong stocks) and bonds will fall (strong economy, more likely that the Fed will raise rates).

But while markets react quickly to this number, it is somewhat controversial, for several reasons: (1) the imprecision of the initial estimate; (2) problems in counting new businesses and the self-employed; (3) the timing of employment relative to the business cycle. I will now discuss the criticisms of the headline number in turn, and then discuss other parts of the report:

Imprecision: The establishment survey is revised in the two months following the initial release (as well as several more times in later years). For example, in September 2006 the initial number was +51,000, a number described as “dire” by one analyst. But in October this number was revised to +148,000, and in November it was revised again to +203,000. Presumably the analyst viewed the revised number a bit differently. During 2006, the range of the revisions (see here for a table) from the initial announcement to the second revision was from -43,000 to +152,000. The Bureau of Labor Statistics states that a 104,000 monthly change is statistically significant. All this implies a bit more randomness in these numbers than some analysts seem to believe. A further complication is that the news from the headline number is often offset by recent revisions; if the headline number is below expectations, but the two previous months have been revised up, is that good news or bad? Thoughtful traders may conclude that markets tend to overreact a bit to these numbers.

New Business and the Self-Employed: Since the establishment survey works with existing businesses it cannot measure newly-formed businesses or count the self-employed. The Department of Labor attempts to correct for these omissions by including an adjustment to the numbers for these factors, based on a model. Analysts who believe that the economy is weak tend to argue that these adjustments overstate employment.

Timing: Many companies may attempt to keep workers even after business has turned down. It is expensive to find, hire and train workers; despite all the announced layoffs by firms that have experienced losses, the evidence suggests that many firms prefer to keep workers on their payrolls even when demand for their products is down, so that they will be prepared for the next upturn. Thus, the economy may turn down before employment actually drops. Similarly, when the economy has started to turn up, firms may be reluctant to hire workers until there is ample evidence that business is good. Economists view the decision to hire or fire a worker as an investment decision, unlike the decision to buy supplies for the office. A business that is losing money due to a lack of customers can sell off extra supplies that will not be needed, confident that when business is better it can buy new supplies at the going price. But a company will try to avoid firing workers because there is a significant cost to finding and training new workers.

As I have noted above, the employment report is much more than the headline number from the establishment survey. I will briefly mention two other parts of the report that attract attention, the household survey and the index of hours worked.

The household survey is used to calculate the most politically important number, the unemployment rate. The household survey asks people who are not employed whether they were available for work and had made specific efforts to find employment during the past four weeks; the BLS further states that whether an individual is eligible for unemployment benefits does not enter into the calculation. The household measure of employment is much less precise than the establishment survey, with a 90% confidence interval of plus or minus 430,000, so it is usually ignored by most market analysts, but the unemployment rate (particularly among certain groups like minorities or women) can be a very important number for political analysts. Former Fed Chairman Greenspan paid attention to the rate of job leavers (see Table A8), the percentage of workers who left a job voluntarily; he believed that this was an indicator of the strength of the labor market.

The index of hours worked (see Table B5) attempts to measure not just how many people are working but how much total work they are doing. The Labor Department calculates how many hours per week nonsupervisory workers are working and then creates an index. While heroic assumptions must be made to create an index (similar to the assumptions of any index, in which very different things are summed together), this is an interesting monthly indicator of the US economy. It somewhat accounts for the timing problem cited above, in that firms can adjust how much work is done (via a shorter work week) more easily than adjusting the number of workers.
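A minimal sketch of the idea behind an aggregate-hours index, with invented numbers: multiply headcount by average weekly hours and scale a base period to 100. A shorter work week lowers the index even when employment is flat:

```python
# Sketch of an aggregate-hours index: total hours worked (headcount
# times average weekly hours), scaled so the base period equals 100.
# The headcounts and hours below are invented for illustration.

def hours_index(workers, avg_hours, base_workers, base_avg_hours):
    """Total hours relative to the base period, times 100."""
    return 100 * (workers * avg_hours) / (base_workers * base_avg_hours)

# Same headcount, but the average work week falls from 33.8 to 33.0
# hours: the index drops even though employment is unchanged.
print(round(hours_index(100_000, 33.0, 100_000, 33.8), 1))  # 97.6
```

This is why the hours index can signal a slowdown before the headline payroll number does: firms cut hours before they cut workers.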

Understanding the US Economic Data Schedule

Tuesday, June 24th, 2008

Here is a schedule of the major US economic numbers in the order in which they are released for May 2008. The key to understanding data releases is to see how they fit in to the overall economic picture. There are many economic calendars available on the internet for free such as here and here.

The best overall picture of the economy is the GDP (approximately the measure of all goods and services produced in the US during the calendar quarter); as you can see from the table below, the first estimate of GDP for the second quarter of 2008 (which covers the period April 1 until June 30) will be released on July 31, 2008 and then be revised twice, producing a final number on September 26. But there will be much information available to the market about the economy in May 2008 before then, starting with the unemployment claims data released on May 8.

Economists who analyze the data follow all these releases (some more than others, based on their experience and research) and form a continually changing picture. Releases in red attract more than the usual amount of attention. Alert investors will be aware of upcoming data releases that can affect the markets and plan accordingly. As this site is updated, there will be links to various specific releases such as the employment report; by clicking on these links you can learn more about the informational content of the report. There are many ways to use these data (there are many economists who find employment analyzing these data at different institutions) and there are almost as many views as to which releases are most important. What follows are one economist’s views.

Below is the release schedule for May 2008 data; while the order of data release varies a bit from one month to the next, thinking about these releases as a group should help you understand how the market learns about the economy.

The general pattern is that survey data is available first; these include surveys of businesses by the regional Federal Reserve Banks and surveys of consumers. Surveys of the labor market are also available relatively early, including weekly jobless claims (the number of workers who have lost jobs and are applying for unemployment benefits; see here for my take) and the employment report (a survey of businesses and households that measures the amount of work done and the number of people who would like to work but have not found jobs; see here for a fuller description).

In general, economists pay more attention to data about what people and businesses have done (employment, retail sales and so forth) than what they say they might do (consumer and business surveys). But economists are willing to work with whatever they have and use the survey data until the actual data arrive later.

Date Release
May 8 Unemployment Claims (description)
May 15 Philadelphia Fed Survey
May 15 New York Fed Survey (Empire State)
May 16 Michigan Sentiment Index
May 27 Conference Board consumer confidence
May 30 Chicago PMI
June 2 ISM index
June 3 Auto sales
June 4 Nonmanufacturing ISM
June 6 Employment report (description)
June 11 Treasury Budget
June 12 Export and Import Prices
June 12 Retail Sales
June 13 Consumer Price Index
June 17 Producer Price Index
June 17 Housing Starts
June 17 Industrial Production
June 19 Leading Indicators
June 25 Durable Orders
June 25 New Home Sales
June 26 Existing Home Sales
June 27 Personal Income and Outlays
July 1 Construction Spending
July 2 Factory Orders
July 10 International Trade
July 12 Business Inventories
July 31 Advance 2008:II GDP
August 28 Preliminary 2008:II GDP
September 26 Final 2008:II GDP

Understanding US Economic Data

Tuesday, June 24th, 2008

US economic data, from a wide variety of sources (both government and private), are published almost every day. Market participants use this data to estimate the growth of the economy and the implications for the valuation of particular assets. Strong economic growth probably means strong corporate sales and profits, but may mean that the Fed will raise interest rates.

Economic calendars that tell you which data will be released when are widely available (see, for example, this calendar). Investors who may not necessarily analyze the data themselves can learn when data will be released, and get an idea about what the market expects. Market prices tend to be very volatile at the time of important data releases; sophisticated traders will be aware of these releases and incorporate them into their trading strategy.

This section will describe some of the most important releases in terms of how markets are affected. US Economic numbers are typically constructed from surveys of businesses and households. With a population of 300 million, it is difficult to get accurate data quickly. There are a number of government agencies (and some private companies) that calculate economic data and publish it on a regular basis.

Many economists attempt to forecast the data, and some of these forecasts are widely available. An economic calendar (such as this one) often includes a market expectation as well as a forecast by a particular forecaster. Further, there are nonpublic forecasts, the so-called “whisper numbers” that circulate among traders before an announcement.

The numbers themselves are not quite as important as how they compare to the expected number. For example, consider a typical employment report. The number that gets the most attention, the “headline number”, is the number of new jobs (from the establishment survey; see the essay about the employment report for more). In a typical month, the number of jobs will increase by around 100,000 or so, so perhaps the average market expectation is 100,000. A higher number would cause participants to think that firms are hiring more workers than normal and the economy is growing more strongly; a lower number, that it is growing less strongly. Then, close to the date of release, there might be a “whisper number” of 150,000, an estimate by a well-known forecaster who might have more credibility or better information. If the number comes in below 150,000 (even if it is above 100,000), it may then be viewed as a weak number.
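The comparison logic in that example can be sketched in a few lines; the job counts are hypothetical:

```python
# Sketch of how traders might read a headline number against two
# benchmarks: the published consensus and a "whisper" estimate.
# All figures are hypothetical.

def read_release(actual, consensus, whisper=None):
    """Judge a release against the whisper number if one exists,
    otherwise against the consensus."""
    benchmark = whisper if whisper is not None else consensus
    if actual > benchmark:
        return "stronger than expected"
    elif actual < benchmark:
        return "weaker than expected"
    return "in line"

# 120,000 new jobs beats the 100,000 consensus...
print(read_release(120_000, 100_000))            # stronger than expected
# ...but looks weak against a 150,000 whisper number.
print(read_release(120_000, 100_000, 150_000))   # weaker than expected
```

The same release is read two different ways depending on which expectation the market has anchored on, which is exactly the ambiguity described above.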

To further complicate matters, the employment report also revises the data of the last few months. If the current month is a bit weaker than expected, but the last few months were revised up, traders may view the current number as stronger than expected. Finally, there are many more numbers in any given report besides the headline number. The employment report contains around 20 sections filled with tables and charts (see here), and a clever analyst might find evidence in one of these sections that provides a very different view of the labor market than would be provided by the headline number alone. All this usually results in a banal article in the newspaper the next day along the lines of “Market soars on job growth” or “Market soars despite weak job growth”.

There is another important distinction, between survey data, such as the Michigan survey of consumer confidence or the Purchasing Managers Index (ISM), where individuals or businessmen indicate what they intend to do, and actual data, such as retail sales or durable goods shipments, which measure what individuals or businessmen actually do. In general, survey data is easier to collect, can be released more rapidly and may indicate what will happen in the future, while actual data is harder to collect, is released more slowly and only indicates what has happened in the past. But it is important to note that people often say things on surveys that are not consistent with what they actually do. The problems of polling and forecasting elections are well known, yet sometimes the same people who criticize presidential election polls put much more faith in the purchasing managers index or consumer confidence numbers. Survey numbers can be useful, but should be viewed as at best a crude approximation of the underlying reality.

Some might argue that this distinction is somewhat arbitrary, as the actual data are surveys themselves. The government does not calculate all retail sales in a given month, but instead estimates the amount of retail sales from a survey of some retailers. But the important point is that this is a survey of what has actually happened at those stores, and not the opinions of the store owners about how business has been.

One further issue is that some data releases are inputs into other data releases. The Department of Commerce publishes monthly consumption numbers and a quarterly GDP report that includes quarterly consumption (since the monthly numbers are reported at annual rates, the quarterly figure is essentially the average of the three monthly numbers). Careful data analysts can closely replicate the work that is done at the Department of Commerce and get a fairly good estimate of the quarterly GDP numbers before they are officially published. Putting together the pieces of the puzzle is both an art and a science; careful consumers of economic research should be able to distinguish between those who have done the work to interpret the numbers and those who have not.
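A rough sketch of that replication exercise follows. The dollar figures are invented; the sketch assumes the convention that monthly consumption is published at seasonally adjusted annual rates, so the quarterly level is approximately the average of the three monthly figures:

```python
# Sketch of replicating one piece of the GDP report from monthly data.
# Monthly consumption is published at seasonally adjusted annual rates,
# so the quarterly level is roughly the average of the three months.
# All values are illustrative ($ billions).

monthly_pce = [9_700.0, 9_730.0, 9_760.0]   # April, May, June levels

quarterly_pce = sum(monthly_pce) / len(monthly_pce)
print(quarterly_pce)   # 9730.0

# Annualized growth rate versus the prior quarter's (invented) level:
prior_quarter = 9_650.0
growth = 100 * ((quarterly_pce / prior_quarter) ** 4 - 1)
print(round(growth, 1))
```

With two of the three monthly numbers in hand, an analyst can already pin down most of the quarterly consumption figure before the GDP report appears, which is why the later releases in the schedule above often hold few surprises.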
