Bureau of Economic and Business Research


2000  Abstracts for Working Papers


We consider an auction in which the behavior of one potential bidder departs mildly from full rationality.  We show that the presence of such an inexpert bidder can be enough to discourage all of an infinitely large population of fully rational potential bidders from entering an auction.  Furthermore, this departure from full rationality does not reduce the inexpert bidder’s payoff.

This study examines the evolution of market efficiency of the Dow Jones Industrial Average over the last 103 years.  Technological advances in information systems, communication, forecasting, and trading, together with progress in investors’ ability to use relevant information in their trading strategies, are hypothesized to raise the level of market efficiency.  Also examined are the relations between the current period’s volatility, rate of return, trading volume, and autocorrelation, as well as the effects of the previous period’s autocorrelation, volatility, and rate of return on the current period’s autocorrelation.  In addition, previous evidence of significant autocorrelation may not offer adequate information about market efficiency if the level of autocorrelation changes over time and the changes are random.  If the changes in actual autocorrelation are random, the market would still be weak-form efficient in the sense that investors cannot predict the market using the estimated autocorrelation.  Various ratio, serial correlation, and runs tests are performed to test for first-order autocorrelation (AR(1)) in daily returns on the Dow Jones Industrial Average Index over the period 1896-1998; variance ratio and runs tests are also used to test for (non)random changes in autocorrelation.  Results indicate significant autocorrelation in about a third of the 103 years and nonrandom changes in the autocorrelation, revealing a pattern in the evolution of market efficiency.  Regression analyses using the estimated autocorrelation as the dependent variable are then conducted to analyze the evolution of market efficiency and to estimate the effect of relevant factors on the level of autocorrelation.
The regression analyses demonstrate that volatility, rate of return, and trading volume of the previous year have stronger negative relations with the level of autocorrelation than those of the current year, and that the previous year’s level of autocorrelation has a significant positive relation with the current year’s level.  These findings imply that investors consider the previous year’s stock return behavior in determining their trading strategies.  The study also finds that positive autocorrelation occurs more frequently in periods of higher autocorrelation, while negative autocorrelation occurs more frequently in periods of lower autocorrelation.  Furthermore, negative autocorrelation is more strongly associated with higher volatility; that is, market overreaction is more frequently related to higher volatility.
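The core statistics in this abstract, the first-order autocorrelation of daily returns and a runs test on their signs, can be sketched in a few lines of Python.  This is an illustrative implementation of the standard textbook formulas, not the authors' code; the simulated return series is hypothetical.

```python
import math
import random

def ar1_autocorr(returns):
    """First-order sample autocorrelation AR(1) of a return series."""
    n = len(returns)
    mean = sum(returns) / n
    num = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, n))
    den = sum((r - mean) ** 2 for r in returns)
    return num / den

def runs_test_z(returns):
    """Z statistic of a Wald-Wolfowitz runs test on return signs (normal approx.)."""
    signs = [r >= 0 for r in returns]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

# Hypothetical "year" of 250 daily returns; with iid returns, rho should be
# near zero and |z| small, consistent with weak-form efficiency in that year.
random.seed(0)
returns = [random.gauss(0, 0.01) for _ in range(250)]
rho, z = ar1_autocorr(returns), runs_test_z(returns)
```

Applied year by year over 1896-1998, the per-year estimates of `rho` would play the role of the dependent variable in the regressions described above.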

We develop a spatial model in which we endogenize both the pricing of ATM services by banks and the choice of home bank and ATM use by consumers.  The equilibrium delivers the empirical regularities:  banks set high account fees for their own customers but do not charge them for ATM usage; in contrast, banks charge high ATM fees to non-member users, fees that exceed the levels that would maximize ATM revenues from non-members; and larger banks set higher account fees and demand higher surcharges for ATM use than smaller banks.  Paradoxically, we find that (i) a bank’s ATM revenues may fall short of its costs of ATM provision; and (ii) prohibiting banks from surcharging non-members, by forcing banks to charge members and non-members the same ATM price, leads to higher ATM prices, greater bank profits, and reduced consumer welfare.

The Gini coefficient is a downwardly biased measure of inequality in small populations when income is generated by one of three common distributions.  The paper discusses the sources of bias and argues that this property is far more general.  This has implications for (i) the comparison of inequality among sub-samples, some of which may be small, and (ii) the use of the Gini in measuring firm size inequality in markets with a small number of firms.  The small sample bias has often led to misperceptions about trends in industry concentration.  A small sample adjustment reduces the bias, which can then no longer be signed as positive or negative.  Finally, an empirical example illustrates the importance of using the adjusted Gini.  In this example it is shown that, controlling for market characteristics, larger shipping cartels include a stochastically identical (in terms of relative size) set of firms as smaller shipping cartels.
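As an illustration, the sample Gini and one common small-sample correction (rescaling by n/(n-1)) can be computed as follows.  The paper's own adjustment may differ, so treat this as a generic sketch; the market shares are hypothetical.

```python
def gini(incomes):
    """Sample Gini via the mean absolute difference: G = MAD / (2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

def gini_adjusted(incomes):
    """One common small-sample correction: rescale by n / (n - 1)."""
    n = len(incomes)
    return gini(incomes) * n / (n - 1)

shares = [10, 20, 30, 40]   # hypothetical market shares of a 4-firm cartel
g, g_adj = gini(shares), gini_adjusted(shares)   # 0.25 vs. ~0.333
```

With only four firms the unadjusted and adjusted values differ by a third, which is exactly the kind of gap that can distort comparisons of concentration across cartels of different sizes.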

This paper provides an explanation for the common observation that higher income neighborhoods typically receive better public services than lower income neighborhoods.  Intuitively, one might expect that lower income groups, which typically form the voting majority of cities, would object to an unfair allocation of this nature.  Wealthy individuals, however, have the option of moving to the suburbs.  As we learn from the tax competition literature, mobile factors are generally able to command a premium.  Since institutional constraints prevent regressive taxation, and public goods are by definition consumed in equal quantity by all agents, only public services remain as an instrument for municipalities to use to keep wealthy agents in their tax base.  We show that both rich and poor agents benefit from this differential access to public services and explore how factors like the ratio of rich to poor agents and the difference between their incomes affect the equilibrium allocation.

It is well known that most standard specification tests are not valid when the alternative hypothesis is misspecified.  This is particularly true in the error component model, when one tests for either random effects or serial correlation without taking account of the presence of the other effect.  In this paper we study, analytically and by simulation, the size and power of the standard Rao score tests when the data are contaminated by local misspecification.  These tests are adversely affected under misspecification.  We suggest simple procedures to test for random effects (or serial correlation) in the presence of local serial correlation (or random effects); these tests require only ordinary least squares residuals.  Our Monte Carlo results demonstrate that the suggested tests have good finite sample properties under local misspecification, and in some cases even under far distant misspecification.  Our tests are also capable of detecting the right direction of the departure from the null hypothesis.  We also provide empirical illustrations to highlight the usefulness of our tests.

This paper endogenizes exclusive dealing through a distribution channel in auction markets.  In particular, it is demonstrated that a seller prefers to exclude final consumers and sell only to resellers when these resellers can gain access, at a cost, to a sufficiently larger market than the seller himself.  The intuition behind this result is that the resellers can recoup their expenses for marketing the item by re-selling it to the final consumers.  If some of the consumers participate in the first auction and are outbid by the resellers, this is an indication that their values for the item are relatively low.  Outbidding part of their customer base is ‘bad news’ for the resellers, and this depresses their bids when final consumers compete with them.  The socially optimal and revenue-maximizing choices of auction format may not coincide.  It is possible that restricting participation of consumers is socially optimal but privately sub-optimal, and vice versa.  The results of this paper suggest that (i) the exclusion of final consumers in some auctions may not be driven by transaction cost considerations, and (ii) sellers should not necessarily sell directly to consumers even though new technologies, such as electronic/Internet trading, allow them to do so at essentially zero cost.

Members of a bidding ring often use a private sale among themselves, known as the knockout, to decide who will buy the object that the ring has secured.  The difference between the price realized in this private sale and the price paid by the ring in the public auction is divided, on the basis of a linear sharing rule, among the ring members.  These side-payments provide an incentive for the ring members to bid higher than they would have in an identical public auction.  As a consequence, neither the realized price in the private sale nor the total payments of the winner are unbiased estimates of the price the item would have fetched in the public auction in the absence of collusion.  This paper evaluates the extent of this overestimate in the independent private values case, for both first-price and second-price post-auction knockout sales.  Bids are not independent of the sharing rule, but transfers from the winning bidder are.  Further, bidder payoffs are independent of both the auction format and the sharing rule.  Finally, it is shown that the “overbidding” in the knockout sale is increasing in the dispersion of bidder valuations and is of significant empirical relevance.  The results of this paper can be used to obtain an unbiased assessment of the damages inflicted on the seller.

Tests based on the quantile regression process can be formulated like the classical Kolmogorov-Smirnov and Cramér-von Mises tests of goodness-of-fit, employing the theory of Bessel processes as in Kiefer (1959).  However, it is frequently desirable to formulate hypotheses involving unknown nuisance parameters, thereby jeopardizing the distribution-free character of these tests.  We characterize this situation as “the Durbin problem,” since it was posed in Durbin (1973) for parametric empirical processes.

In this paper, we consider an approach to the Durbin problem involving a martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving the quantile regression process.  In particular, we suggest new tests of the location shift and location-scale shift models that underlie much of classical econometric inference.

The methods are illustrated in some limited Monte Carlo experiments and with a reanalysis of data on unemployment durations from the Pennsylvania Reemployment Bonus Experiments.  The Pennsylvania experiments, conducted in 1988-89, were designed to test the efficacy of cash bonuses paid for early reemployment in shortening the duration of insured unemployment spells.

An ethnographic study of a network marketing organization examines the practices and processes involved in managing members’ organizational identification.  Specifically, it argues that this organization manages identification by using two types of practices:  sensebreaking practices that break down meaning, and sensegiving practices that provide meaning.  When both sensebreaking and sensegiving practices are successful, members positively identify with the organization.  When either sensebreaking or sensegiving practices fail, members deidentify, disidentify, or experience ambivalent identification with the organization.  A general model of identification management is posited, and implications for both theory and practice are offered.

This paper investigates bidding patterns in sequential auctions using data from a sale of rare books by a non-profit institution.  The data have features of a field experiment:  lots were arranged in alphabetical order and the reserve was set non-strategically.  Further, half the bids were placed by mail-in bidders, for whom the auction was simultaneous in a temporal sense.  The nature of the data allows us to distinguish the effects induced by the sequential nature of the catalogue from those induced by the sequential nature of the sale.  We estimate trends in the probability of sale, number of submitted bids, expected prices, and price variability separately for mail-in and floor bidders.  We document the existence of distinct ‘catalogue’ effects on bidding trends and determine their causes.  We demonstrate that these catalogue effects also influence floor bidder behavior.  Taken together, our results show that (i) as the auction progresses, bidders become more aggressive in competing for some lots, but become disinterested in others, and (ii) bidding patterns in sequential auctions are a composite of ‘catalogue’ and ‘order of sale’ effects.  The catalogue effects are of a non-strategic nature, while the order-of-sale effects are possibly due to strategic behavior.  Finally, we derive a theoretical model that is consistent with the empirical findings.

In this paper, I study one of the largest changes in the overtime provisions of the Fair Labor Standards Act (FLSA) in the last two decades – the extension of coverage eligibility to state and local government workers.  The Supreme Court’s 1985 decision in Garcia v. San Antonio Metropolitan Transit Authority made 80 percent of state and local government workers eligible to receive compensation for overtime hours worked.  Using a difference-in-differences approach, I compare hourly state and local government workers to multiple control groups, specifically hourly federal government employees and salaried state and local government employees.  Although the cost of overtime went up for public sector employers under the law, surprisingly there is little evidence that the overtime hours and amount of overtime worked by the treated workers went down relative to the control groups.  At a minimum, there is no change in overtime hours, and for many of the groups, overtime hours actually went up after the change.  The behavior of public sector workers seems to be consistent with a Coasian model in which overtime provisions are explicitly bargained by the parties involved, likely making the overtime legislation less important.  To further explore overtime coverage for unionized public sector workers, I collected a data set of union contracts from the American Federation of State, County and Municipal Employees covering almost 65,000 public sector workers in the state of Illinois in 1985.  This case study evidence shows that the majority of unionized workers in the public sector in Illinois did have explicitly stated overtime coverage in their union contracts, which was, at a minimum, the same as the provisions granted to them in the Garcia decision.

This paper provides new evidence on the economic consequences of unilateral divorce laws on the future labor market outcomes of children.  Using a cohort of young adults from the 1990 census, we examine the effect of living in a unilateral divorce state as a child on education, earnings, and marital status.  Women with many years of childhood exposure to unilateral divorce laws have lower wages and have completed less schooling. However, there is no statistically significant effect of unilateral divorce exposure on men’s wages.  Both women and men are more likely to marry and less likely to get divorced with more years of exposure to unilateral divorce as a child.  We also explore alternative mechanisms through which unilateral divorce laws can affect children’s outcomes.  The evidence suggests that while divorce rates did increase significantly as a result of the laws, bargaining power within the household is also an important factor driving our results.

We develop a model that encompasses both the incomplete contracts that are used in practice and the idealized complete contracts that address all contingencies.  The objectives of the paper are to (i) examine the extent of the inefficiency caused by the constraint of contractual incompleteness; (ii) identify properties of agents’ preferences that determine whether or not incompleteness causes inefficiency; and (iii) analyze the implications of the incompleteness constraint for optimal contracts in principal-agent and bilateral trade models.

Firms are embedded in a network of relationships that influence the flow of resources among them.  The dynamic resource flows and differentiated structural positions lead to asymmetries across the firms of the network, significantly influencing their competitive behavior toward each other.  We develop a multi-level conceptual model and offer propositions relating key network properties to competitive dynamics.  Such a structural embeddedness perspective advances our understanding of competitive dynamics and provides intriguing possibilities for future research.

This paper considers a financier contemplating a venture capital investment in a firm whose true value is unknown.  The financier must make information-gathering and investment decisions on an ongoing basis to decide whether to undertake the investment and, later, if he chooses to finance the firm, how to manage his investment.  We characterize how the financier’s information acquisition is affected by the liquidity of the market for his claims on the firm, and derive the implications for the pricing of the firm.  We distinguish between two qualitatively different types of information acquisition:  evaluation efforts made prior to a potential investment, and monitoring efforts applied to already-funded firms that affect the financier’s decisions about whether to take an active position in the firm (e.g., replacing management) and whether to change its financial stake.  We investigate the effects of liquidity on share price, describing why the market responds more favorably to less liquid forms of finance, and explore the consequences for investor activism.  Finally, we characterize the socially optimal levels of evaluation and monitoring in order to determine when a marginal increase in liquidity has welfare-enhancing effects on the financier’s behavior.

The Taft-Hartley Act gives the President and federal courts extraordinary power to enjoin lawful strikes that pose a threat to national health or safety. Although rarely used, this power has great impact. We show that Taft-Hartley injunctions lowered public support for unions by portraying them as selfish economic actors who were harmful to the nation, and altered the balance of bargaining power in critical strikes, usually to the detriment of unions. 

We trace these injunctions to their common law antecedents from the 1820s. We show that courts routinely abused their equitable powers in labor disputes. Our research shows that Taft-Hartley courts have failed to avoid this pitfall. They (1) failed to exercise judicial powers, (2) relied on distorted assumptions to support injunctions, (3) interpreted national health to mean national inconvenience, (4) favored the government - and by extension, powerful employers - by granting eighty percent of the petitions for injunctions, and (5) issued impractical orders.

Although the last Taft-Hartley injunction was issued in the 1970s, this public policy remains relevant in two respects. When major strikes affect the nation - most recently, the 1997 Teamsters strike at UPS - presidents respond to growing public pressure by threatening to invoke this power. Although politically expedient, this power undermines a main tenet of the National Labor Relations Act that permits unions to strike in support of their bargaining proposals. In addition, our research questions a current theory explaining that the sharp decline in strikes resulted from President Reagan’s use of the striker replacement doctrine in the PATCO strike. By showing that President Carter’s use of Taft-Hartley in the 1977-1978 national coal strike caused the first sharp drop in strikes, and by demonstrating that this law was intended to impair this right, we show that the presidency plays a more complex role in the dying right to strike.

Skills associated with literacy and numeracy carry important implications for consumers.  However, past research on illiterate consumers is almost non-existent.  In this study, a variety of methods, such as in-depth interviews and observations in a shopping environment, were used to understand consumer illiteracy and innumeracy.  The sample consisted of students at an adult education center.  Themes from in-depth interviews of students and teachers suggest a high degree of concrete thinking exemplified by dependence on audio-visual information, short-term orientation, use of numbers as concrete information, intuitive processing, and an emphasis on contextual learning.  Related themes include dependence on others and maintenance of self-esteem in service encounters.  Behavioral outcomes observed include perceptual decision-making and high loyalty to retail outlets.  These themes were more accentuated at lower levels of literacy, where extreme dependence on others and the use of rudimentary defensive practices to avoid negative experiences were common.  Observations from classroom activities and tutoring and from shopping trips reinforced the existence of these themes and suggested a model of decision-making with little effort spent on evaluation of alternatives.  From an information processing perspective, most of the effort is spent on perceptual processes such as locating a product and determining price information.  This research raises fundamental theoretical issues relating to the adequacy of existing models of consumer behavior in capturing decision-making of illiterate consumers, as well as important practical implications for marketers.

This paper presents an equilibrium explanation for the persistence of naïve bidding.  Specifically, we consider a common value auction in which a “naïve” bidder (who ignores the Winner’s Curse) competes against a fully rational bidder (who understands that her rival is not).  We show that the naïve bidder earns higher equilibrium profits than the rational bidder when the signal distribution is symmetric and unimodal.  We then consider a sequence of such auctions with randomly selected participants from a population of naïve and rational bidders, with the proportion of bidder types in the population evolving in response to their relative payoffs in the auctions.  We show that the evolutionary equilibrium contains a strictly positive proportion of naïve bidders.  Finally, we consider more general examples.  In these examples (i) a naïve bidder matched against a rational bidder does worse than his rational opponent but (ii) a naïve bidder matched against a rational bidder does better than a rational bidder matched against another rational bidder.  Again, the evolutionary equilibrium contains a strictly positive proportion of naïve bidders.

Previous research on joint ventures using a transaction costs perspective has found an ex ante relationship between the contextual threats addressed by TCE and the presence of various mechanisms designed to cope with these threats.  Our study extends this research by examining whether the adoption of such mechanisms impacts the subsequent success of these alliances.  The results help to clarify the usefulness and limitations of these TCE-related constructs, while raising important considerations for future research.

This paper studies the behavior of the default-risk-free term structure and of term premia in two general equilibrium endowment economies with complete markets but without money.  In the first economy there are no frictions, as in Lucas (1978); in the second, risk-sharing is limited by the risk of default, as in Alvarez and Jermann (2000a, b).  Both models are solved numerically, calibrated to UK aggregate and household data, and their predictions are compared to data on real interest rates constructed from UK index-linked data.  While both models produce time-varying risk or term premia, only the model with limited risk-sharing can generate enough variation in the term premia to account for the rejections of the expectations hypothesis.

In today’s business landscape, the familiar traditional corporation has been augmented by new species such as joint ventures, strategic alliances, and franchise chains.  The properties of these new species, termed hybrid forms, are distinctly different from those of the traditional corporation.  In this paper we examine whether one hybrid form, the franchise chain, can coordinate elements of the marketing mix (price, quality, and advertising) in the pattern suggested by theory and achieved by the traditional corporation.  The results suggest that franchise chains are unable to coordinate the elements of the marketing mix.  Implications for theory and practice are discussed.

The score function is associated with several optimality features in statistical inference.  This review article looks at the central role of the score in testing and estimation.  The maximization of power in testing and the quest for efficiency in estimation both lead to the score as a guiding principle.  In hypothesis testing, the locally most powerful test statistic is the score test or a transformation of it.  In estimation, the optimal estimating function is the score.  The same link can be made in the case of nuisance parameters:  the optimal test function should have maximum correlation with the score of the parameter of primary interest.  We complement this result by showing that the same criterion should be satisfied in the estimation problem as well.
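For reference, the textbook objects underlying this discussion (standard definitions, not results specific to this article) are the score, its zero-mean property, the Fisher information, and the score (Lagrange multiplier) test evaluated at the restricted estimate:

```latex
U(\theta) = \frac{\partial \log L(\theta)}{\partial \theta}, \qquad
\mathrm{E}_{\theta}\!\left[ U(\theta) \right] = 0, \qquad
I(\theta) = \mathrm{Var}_{\theta}\!\left[ U(\theta) \right],
```
```latex
LM = U(\tilde{\theta})^{\top}\, I(\tilde{\theta})^{-1}\, U(\tilde{\theta})
\;\sim\; \chi^{2}_{q} \quad \text{asymptotically under } H_{0},
```

where $\tilde{\theta}$ is the maximum likelihood estimate under the null and $q$ is the number of restrictions tested.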

This paper develops a flexible parametric approach to capture asymmetry and excess kurtosis, along with conditional heteroskedasticity, using a general family of distributions for analyzing stock returns data.  Engle’s (1982) autoregressive conditional heteroskedastic (ARCH) model and its various generalizations can account for many of the stylized facts, such as fat tails and volatility clustering.  However, in many applications, it has been found that the conditional normal or Student’s t ARCH process is not sufficiently heavy-tailed to account for the excess kurtosis in the data.  Moreover, asymmetry in financial data is rarely modeled systematically.  Therefore, there is a real need for an asymmetric density that can be easily estimated and whose tails are heavier than the Student’s t-distribution.  The Pearson type IV density is such a distribution, and it is much easier to handle than distributions previously used in the literature, such as the non-central t and Gram-Charlier distributions, to account for skewness and excess kurtosis simultaneously.  The Pearson type IV distribution has three parameters that can be interpreted as variance, skewness, and kurtosis; these can also be considered different components of the risk premium.  Modeling the time-varying behavior of the mean, variance, skewness, and kurtosis simultaneously produces a better explanation of risk than mean-variance analysis alone.  These methodologies can also be used to analyze other financial data such as exchange rates, interest rates, and spot and futures prices.
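To make the shape of the Pearson type IV density concrete, here is a minimal pure-Python sketch under the usual parameterization (tail parameter m, skewness parameter nu, scale a, location lam).  The exact normalizing constant involves the complex gamma function, so this sketch normalizes numerically instead; the parameter values are hypothetical, not calibrated to any data in the paper.

```python
import math

def pearson4_kernel(x, m=3.0, nu=1.0, a=1.0, lam=0.0):
    """Unnormalized Pearson type IV density: m controls tail heaviness, nu skewness."""
    z = (x - lam) / a
    return (1.0 + z * z) ** (-m) * math.exp(-nu * math.atan(z))

# Normalize numerically on a wide grid (tails with m = 3 are negligible
# beyond |x| = 40, so simple rectangle quadrature suffices for a sketch).
GRID, SPAN = 4000, 40.0
H = 2 * SPAN / GRID
XS = [-SPAN + i * H for i in range(GRID + 1)]
NORM = sum(pearson4_kernel(t) for t in XS) * H

def pearson4_pdf(x):
    """Numerically normalized Pearson type IV density at x."""
    return pearson4_kernel(x) / NORM
```

With nu > 0 the factor exp(-nu * arctan(z)) tilts mass toward negative z, producing the asymmetry the abstract describes, while the polynomial tail (1 + z^2)^(-m) is heavier than that of a normal and, for small m, heavier than the Student's t.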

One of the main ingredients in forming an international portfolio is the correlation matrix. The correlations represent the degree of interdependence across markets.  With the recent globalization of markets and increased volatility, we can expect these correlations to change over time, and quite possibly to go up.  However, the standard practice in modeling asset return dynamics is to assume constant correlation.  This parameterization is simple, and it involves a relatively small number of parameters.  However, the validity of this assumption remains an empirical question.  This paper is concerned with developing a formal test for constancy of correlation, and applying it to financial markets of the USA, Japan, Germany, the UK, France and Italy.
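One elementary way to check correlation constancy across two subperiods is a Fisher z comparison of the subsample correlations.  This is far simpler than the formal test the paper develops, so treat it only as an illustration of the question being asked; the data below are simulated, not actual market returns.

```python
import math
import random

def pearson_corr(x, y):
    """Sample Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def fisher_z_diff(x1, y1, x2, y2):
    """Z statistic for H0: the correlation is the same in both subperiods."""
    r1, r2 = pearson_corr(x1, y1), pearson_corr(x2, y2)
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (len(x1) - 3) + 1 / (len(x2) - 3))
    return (z1 - z2) / se

# Simulated "market returns": strongly correlated in period 1, independent
# in period 2, so the constant-correlation null should be rejected.
random.seed(1)
x1 = [random.gauss(0, 1) for _ in range(50)]
y1 = [v + 0.2 * random.gauss(0, 1) for v in x1]
x2 = [random.gauss(0, 1) for _ in range(50)]
y2 = [random.gauss(0, 1) for _ in range(50)]
z = fisher_z_diff(x1, y1, x2, y2)   # large |z| rejects constant correlation
```

A large |z| here signals that a constant-correlation parameterization of the type criticized in the abstract would be misspecified for these two subperiods.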

Chesher and Jewitt (1987) demonstrated that the Eicker (1963) and White (1980) consistent estimator of the variance-covariance matrix in heteroskedastic models can be severely biased if the design matrix is highly unbalanced.  In this paper, we therefore reconsider Rao’s (1970) minimum norm quadratic unbiased estimator (MINQUE).  We derive analytical expressions for the mean squared errors (MSE) of the Eicker-White estimator, one of MacKinnon and White’s (1985) estimators, and MINQUE, and perform a numerical comparison.  Our analysis shows that although MINQUE is unbiased by construction, it has very large variance, particularly for highly unbalanced design matrices.  Since the variance is the dominant factor in our MSE computation, MINQUE is not the preferred estimator in terms of MSE.  We also study the finite sample behavior of confidence intervals for the regression coefficients, in terms of coverage probabilities, based on the different variance-covariance matrix estimators.  Our results indicate that although MINQUE generally has the largest MSE, it performs relatively well in terms of coverage probabilities.
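For concreteness, the Eicker-White (often called HC0) robust standard error can be sketched for the simplest bivariate regression.  The paper works with general design matrices and with MINQUE, which is more involved, so this is only the baseline estimator, shown with made-up data.

```python
import math

def ols_slope_hc0(x, y):
    """OLS slope with the Eicker-White (HC0) heteroskedasticity-robust s.e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    alpha = my - beta * mx
    resid = [b - alpha - beta * a for a, b in zip(x, y)]
    # HC0 "sandwich": plug each squared residual into the variance formula
    var = sum(((a - mx) ** 2) * (e ** 2) for a, e in zip(x, resid)) / sxx ** 2
    return beta, math.sqrt(var)

# Hypothetical data roughly on the line y = 2x; beta should be close to 2.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
beta, se = ols_slope_hc0(x, y)
```

The bias Chesher and Jewitt point to arises because the squared residuals underestimate the error variances at high-leverage observations, which is why a highly unbalanced design matrix (a few extreme `x` values) makes HC0 unreliable.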

The lack of a supra-national legal authority that can enforce private contracts across borders makes debt repayment in an international setting contingent on borrowers’ willingness to pay rather than ability to pay.  This market failure (i.e., inadequate enforcement) causes investment to fall short of its unconstrained level.  This paper examines how foreign aid affects a country’s willingness to honor private investment agreements.  We consider two types of aid, technical assistance and loan subsidies.  We show that when enforcement is inadequate, aid has the following effects: (i) it reduces default risk, promotes capital flows and can in principle restore investment to its unconstrained level, (ii) when default risk is high, aid can increase the welfare of both the recipient and the donor country. Thus, in this sense, foreign aid serves as an enforcement mechanism in an international setting.  This provides a non-altruism-based rationale for foreign aid and may provide a basis for the existence of multilateral organizations that offer such services.