Bureau of Economic and Business Research   

_______________________________________________________________________

1999 Abstracts for Working Papers


99-0100
Using a high-frequency data set of advertised prices in the personal computer industry, we find that firms that introduced Pentium computers late in the "buy Direct" segment of the market command a higher price premium than early entrants. This is true even among firms that have the same price premium for their 486 computers, but is more pronounced for high-quality firms. Over time, the difference between the Pentium price premia of the late and the early entrants declines to the level of the difference in the corresponding 486 price premia. The decline in the relative Pentium price premia contributes to a decline in overall price dispersion, while price dispersion for 486 computers remains constant. These results suggest that late entrants reap short-run rents from those consumers who are loyal enough to have waited until their entry in order to purchase. They also suggest a rapidly declining price premium for quality over the product cycle. In light of these findings, brand coefficients in hedonic regressions using high-frequency data should not be interpreted as capturing only unobserved quality.


99-0101
We introduce a goodness-of-fit process for quantile regression analogous to the conventional R2 statistic of least squares regression. Several related inference processes designed to test composite hypotheses about the combined effect of several covariates over an entire range of conditional quantile functions are also formulated. The approach is illustrated with some hypothetical examples, an application to recent empirical models of international economic growth, and some Monte Carlo evidence.
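As a hedged sketch of the kind of statistic the abstract describes (our own notation, not necessarily the paper's exact definition), a goodness-of-fit measure for the tau-th conditional quantile can be formed by comparing the minimized check-function criterion of the full linear quantile regression with that of a restricted, intercept-only fit:

\[
R^{1}(\tau) \;=\; 1 - \frac{\hat{V}(\tau)}{\tilde{V}(\tau)},
\qquad
\hat{V}(\tau) \;=\; \min_{\beta}\ \sum_{i=1}^{n} \rho_{\tau}\!\left(y_{i} - x_{i}^{\top}\beta\right),
\qquad
\rho_{\tau}(u) \;=\; u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\]

where \tilde{V}(\tau) is the analogous minimum for the restricted model. Like R2, this measure lies in [0, 1], but it is local to the chosen quantile \tau rather than a global measure of fit.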


99-0102
Estimation of composed error frontier models is generally conducted under certain strict assumptions. In practice, however, these assumptions are not tested thoroughly, probably because simple workable tests are not yet available for these models. This paper develops easily computable specification tests for half-normal composed error frontier models. The tests are based on the information matrix (IM) and moment test principles. These tests are applied to the well-known Cowing (1970) steam-electric data set. Our tests reveal no serious misspecification of the cost model, while for the output model the null hypothesis of correct specification is strongly rejected.
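As a hedged reminder of the general information-matrix principle behind such tests (a generic sketch, not the paper's specific construction for the half-normal frontier model), correct specification implies that the Hessian and outer-product forms of the information matrix coincide, so a test can be based on the sample analogue of their difference:

\[
\hat{d} \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[
\frac{\partial^{2}\ell_{i}(\hat{\theta})}{\partial\theta\,\partial\theta^{\top}}
\;+\;
\frac{\partial\ell_{i}(\hat{\theta})}{\partial\theta}\,
\frac{\partial\ell_{i}(\hat{\theta})}{\partial\theta^{\top}}
\right],
\]

where \ell_{i} is the log-likelihood contribution of observation i and \hat{\theta} is the maximum likelihood estimator. A quadratic form in selected elements of \hat{d}, weighted by a consistent estimate of their asymptotic variance, is asymptotically chi-squared under the null of correct specification.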


99-0103
In the present paper, we analyze the effect of unemployment benefits on unemployment. We do this by elaborating on a model developed by Miyagiwa (1988) in several respects. First, we establish Walras’ law and give explicit consideration to the government budget balance. We also incorporate unemployment benefits indexed to the competitive wage rate. Second, we use the correspondence principle à la Samuelson for our tax incidence analysis, taking a local stability condition into account. These considerations lead to a conclusion opposite to that of Miyagiwa.


99-0105
Offering a variety of products is important for a firm to attract different consumer segments. However, high product variety increases production and distribution costs. Modular product design and parts commonality are approaches used to counter this trend in cost while still offering a variety of products. This paper develops a model to examine when modular products should be introduced and how much modularity to offer. The model considers a market consisting of a high segment and a low segment. Customers choose the product that maximizes their surplus, which is defined as the product’s utility minus its price. The presence of commonality affects the utility of a product: greater commonality decreases production cost but makes the products less distinguishable from one another, which makes them more desirable for the low segment but less desirable for the high segment. The firm’s objective is to design the products and set the prices so as to maximize its profit.
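As a hedged sketch of the kind of two-segment setup the abstract outlines (all notation is ours, introduced for illustration rather than taken from the paper), let segment s in {H, L} of size n_s attach utility u_s(c) to a product designed with commonality level c and priced at p. A customer buys the product, if any, that maximizes surplus, and the firm chooses designs and prices to maximize profit:

\[
S_{s} \;=\; u_{s}(c_{s}) - p_{s} \;\ge\; 0,
\qquad
\max_{\{c_{s},\,p_{s}\}}\ \Pi \;=\; \sum_{s\in\{H,L\}} n_{s}\bigl(p_{s} - k(c_{s})\bigr),
\]

subject to each segment preferring its intended product, where the unit cost k(c) is decreasing in commonality while, as described in the abstract, u_H(c) is decreasing and u_L(c) is increasing in it.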



99-0106
This paper explores the origins and evolution of product markets from a socio-cognitive perspective. Product markets are defined as socially constructed knowledge structures (i.e., product conceptual systems) that are shared among producers and consumers – sharing that allows consumers and producers to interact in the market. Our fundamental thesis is that product markets are neither imposed nor orchestrated by producers or consumers, but evolve from producer-consumer interaction feedback effects. We propose that product markets, which start as unstable, incomplete, and disjointed conceptual systems held by market actors – revealed in the cacophony of uses, claims, and product standards that characterize emerging product markets – become coherent as a result of consumers and producers making sense of one another’s behaviors. We further argue that the sensemaking process is revealed in the stories that consumers and producers tell one another in published media, such as industry newspapers and consumer magazines, which we use as data sources. Specific hypotheses pertaining to the use of product category labels in published sources and the acceptability of different product category members throughout the development process are tested for the minivan market between 1982 and 1988. Our findings suggest that category stabilization causes significant differences between consumers and producers in how they use product category labels for emerging and pre-existing categories. Our findings also show that as stabilization occurred around a category prototype, the acceptability of particular models changed without any physical changes to the models.


99-0107
Tiebout’s basic claim was that when public goods are local, competition between jurisdictions solves the free-riding problem, in the sense that equilibria exist and are always Pareto efficient. Unfortunately, the literature does not quite support this conjecture. For finite economies, one must choose between notions of Tiebout equilibrium that are Pareto optimal but may fail to exist and notions that are nonempty but may be inefficient. This paper introduces a new equilibrium notion, called migration-proof Tiebout equilibrium, which we argue is a natural refinement of Nash equilibrium for a multijurisdictional environment. We show that, for sufficiently large economies with homogeneous consumers, such an equilibrium always exists, is unique, and is asymptotically Pareto efficient.


99-0108
The appealingly simple example of multi-unit uniform-price auctions with constant, uniformly distributed marginal values has atypically many equilibria.


99-0109
In an economy with a continuum of individuals, each individual has a stochastic, continuously evolving endowment process. Individuals are risk averse and would therefore like to insure their endowment processes. It is feasible to obtain insurance by pooling endowments across individuals because the processes are mutually independent. We characterize the payoff from an insurance contracting scheme of this type, and we investigate whether such a scheme would survive as an equilibrium in a noncooperative setting.


99-0110
This paper provides a comprehensive economic analysis of scheduling decisions in an airline network. Although it is widely believed that the growth of hub-and-spoke networks has raised flight frequencies, the only analysis of this question is contained in a recent paper by Berechman and Shy (1998), who analyze an incomplete model. The present analysis shows that flight frequency is higher in a hub-and-spoke (HS) network than in a fully-connected (FC) network, confirming the conventional wisdom. It is also shown that some passengers who could make a connecting trip under the HS network may find the existing flights not sufficiently convenient given their long duration, and thus choose not to travel. By contrast, all passengers choose to travel under the FC network. The welfare analysis shows that the airline provides excessive flight frequency relative to the social optimum in both the FC and HS cases, and that its choice of network type exhibits an inefficient bias toward the HS network.


99-0111
This century and the history of modern statistics began with Karl Pearson's [181] (1900) goodness-of-fit test, one of the most important breakthroughs in science. The basic motivation behind this test was to see whether an assumed probability model adequately described the data at hand. Then, over the first half of this century, we saw the development of some general principles of testing, such as Jerzy Neyman and Egon Pearson's [159] (1928) likelihood ratio test, Neyman's [155] (1937) smooth test, Abraham Wald's test in 1943, and C.R. Rao's score test in 1948. All these tests were developed under the assumption that the underlying model is correctly specified. Trygve Haavelmo [99] (1944) termed this underlying model the a priori admissible hypothesis. Although Ronald Fisher [80] (1922) identified the "Problem of Specification" as one of the most fundamental problems in statistics much earlier, Haavelmo was probably the first to draw attention to the consequences of misspecification of the a priori admissible hypothesis for standard hypothesis testing procedures. We will call this the type-III error. In this paper, we deal with a number of ways in which an assumed probability model can be misspecified, and discuss how some of the standard tests can be modified to make them valid under various misspecifications.


99-0112
Agency problems in inter-firm trading relationships are severe in developing and transitional economies because of the limited decentralized information that can support contract enforcement and because the timing of intermediate-goods production and of payment differ. We derive the consequences for the equilibrium distribution of firm structures, production, prices, profits, and trade in developing and transitional economies. Within a multi-market setting, we characterize equilibrium outcomes both for firms that are directly affected by contracting problems and for those that are not. The equilibrium features both excessive vertical integration and excessive development of small-scale retail enterprises, as well as insufficient and inefficient inter-firm trade. Average profits of vertically integrated firms are higher, and those of small-scale retail enterprises and intermediate suppliers are lower, than they would be were enduring trading relationships more easily established.


99-0113
This paper looks at the strategic behavior of multinational corporations (MNCs) in less developed countries. We start with the premise that MNCs have superior technologies and that domestic firms lack informational access to this superior technology. Workers trained on the MNC technology can transmit the information to competing or start-up domestic firms. We derive conditions under which multinationals strategically offer their workers higher wages than domestic firms to prevent domestic firms from poaching their workforce and learning their trade secrets. With quantity competition in the domestic market, MNCs offer higher wages if and only if their technology advantage is sufficiently great and the output goods are sufficiently close substitutes. With price competition, the MNC always offers the wage premium. We also derive conditions under which multinationals inefficiently divide job tasks between workers in order to raise the number of trained workers that the domestic firm must acquire to learn the MNC’s trade secrets.


99-0114
On the NYSE and exchanges that feature open limit order books, larger orders receive worse prices. Accordingly, market microstructure theory has focused on developing models consistent with this pattern. However, on exchanges such as the London Stock Exchange, NASDAQ, and FX markets, larger orders receive better prices on average. In this paper, we argue that differences in market design can explain this finding. On the LSE and NASDAQ, in sharp contrast to the NYSE or exchanges that feature open limit order books, competition between dealers is largely intertemporal: a trader identifies a particular dealer and negotiates a final price, with only the intertemporal threat of switching dealers imposing pricing discipline on the dealer. We show that dealers will offer greater price improvement to more regular customers, and, in turn, these customers optimally choose to submit larger orders.

Hence, empirically, we predict a striking difference in the relationship between price impact and trade size across market designs: on exchanges where all orders are immediately exposed to price competition, such as the NYSE, price impact and trade size should be positively correlated, whereas on exchanges where competition is largely intertemporal, such as London, they should be negatively correlated. We derive the implications for inter-dealer trade and dealer profits, and we find testable restrictions for pricing across different traders and order sizes. We test the predictions using data from the LSE. The results offer strong support for our hypothesis that on the LSE, broker-dealer relationships drive pricing.


99-0115
This paper derives and tests the implications of relative performance incentives for the forecasts of financial analysts. If, in addition to compensation for absolute forecast accuracy, analysts are compensated on the basis of how their forecasts compare with those of other analysts, then earlier forecasts affect later announcements. The theoretical predictions regarding the direction of bias are tested using data on individual analysts’ forecasts of earnings per share (EPS). We find very strong evidence that the last analyst to report a forecast "over-emphasizes" his private information by reporting a contrarian forecast that overshoots the consensus (mean) forecast. We also find modest evidence that investors do not unravel biases in the forecasts of firms followed by fewer analysts.


99-0116
Taking a social constructivist perspective, this paper presents a socio-cognitive framework for understanding the dynamics of mature competitive markets. Its fundamental premise is that mature competitive markets are as dynamic as emerging ones because the enactment-sensemaking cycles in which producers and consumers engage, and the constant recalibration of product conceptual systems that such activity engenders, are present in both types of markets. Furthermore, it argues that the dynamic processes unleashed by enactment-sensemaking behaviors are evident in the market stories used by producers and consumers to make sense of enacted outcomes or experiences. The value of market stories for understanding market dynamics is illustrated by a rudimentary test of a set of simple propositions. Qualitative support is found for some of the propositions, and the paper concludes by discussing several theoretical and managerial implications of the socio-cognitive perspective as it pertains to understanding and managing in mature competitive markets.


99-0117
We examine the rate of convergence to efficiency in the buyer’s bid double auction for sequences of markets in which the number m of buyers can be arbitrarily larger than the number n of sellers. This rate is shown to be O(n/m²) when m and n are such that m ≥ βn for a constant β > 1. This is consistent with the O(1/m) rate that holds when 1/β ≤ n/m ≤ β, which is proven in Rustichini et al. (1994). Consequently, the single formula O(n/m²) developed in this paper expresses the rate of convergence to efficiency for all sequences of m and n for which n/m is bounded above.


99-0118


99-0119
An unresolved question is whether strategic repurchase programs create long-run firm value. An objective of this paper is to analyze the long-run growth in value of companies that strategically repurchase shares vis-à-vis those that do not pursue a share buyback strategy. Prior studies have not focused on the linkage between share repurchases and the growth in firm value. The analysis focuses on the hypothesis that the growth in the value of firms initiating repurchase strategies is greater than the growth in the value of the matched companies. The results show that in the short run, companies that pursue a strategy of repurchasing shares have a higher growth rate than companies not exercising a buyback strategy. However, the results are not statistically significant. The study indicates that in the long run, firms create more value with a strategy of not repurchasing their shares. Additionally, it was discovered that small and mid-size non-repurchasing companies outperform firms employing a share buyback strategy. Regression analysis was used to test the relationship between growth in firm value and the performance of the free cash flow components of the two types of firms in the sample. The results show that investments by non-repurchasing companies in net working capital and capital projects were instrumental in their outperforming companies using a repurchase strategy. In conclusion, the findings do not support the theory that share repurchase programs are related to management signaling an increase in a firm’s long-run performance in the market. Nor does the study show that a strategy to repurchase shares signals that shares are undervalued.


99-0120
We develop a theoretical model of the dynamics of an industry over the business cycle. We characterize the intertemporal evolution of the distribution of firms, where firms are distinguished by their capital in place and the productivity of their technology. We contrast investment and exit decisions and their consequences for aggregate output, profits, and productivity distributions: (a) across different demand realization paths; (b) along a demand history path, detailing the effects of continued good or bad market conditions; and (c) under different anticipated future market conditions. We also characterize exit rates by age, size, and productivity. The theoretical model generates predictions that are broadly consistent with empirical findings: downturns in demand lead to higher exit of inefficient firms and hence to increased future productivity; recessions are shorter and sharper than expansions; younger, smaller, or less productive firms are more likely to exit; and so on.


99-0121
A theoretical framework is proposed in which consumers and producers are seen as market actors coupled in behavior-cognition cycles – cycles of enacted behavior and sensemaking that generate product knowledge. Repeated cycles of enactment and sensemaking over product concepts give rise to product conceptual systems shared by market actors. The arguments are grounded in the development of the paper clip product market, and researchable propositions focused on discernible elements of knowledge structures shared by producers and consumers are presented throughout the discussion. A methodological argument is made for using market stories as data, and for the application of network analysis tools to assessing change in the conceptual frames of market actors over time. The implications of the framework for marketing theory and practice are also discussed.

99-0122
Matched-model price indexes are generally thought to overestimate the quality-adjusted price level. This bias stems from the fact that only a fraction of the models are sold in consecutive sampling periods and that the price/performance ratio of these models is worse than that of new models. The “unrepresentativeness” of the sample used to construct such indexes could potentially be reduced by obtaining higher-frequency data, thus increasing the fraction of models that are matched. We propose a set of matched-model price indexes that is conformable with regression-based price indexes in order to test this hypothesis. Using computer prices from the Buy Direct press, we find that the bias in the matched-model price index increases with the sampling frequency. This result suggests that models that last for only a brief time are models for which the price/performance ratio has deteriorated very rapidly. Thus, increasing the sampling frequency actually increases the selection bias.
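As a hedged illustration of the matched-model mechanics the abstract refers to (a minimal sketch with hypothetical data and our own function and variable names, not the paper's exact set of indexes), a chained matched-model index links consecutive periods through the geometric mean of price relatives over models observed in both periods, so the fraction of matched models directly determines how much of the sample each link uses:

# Chained matched-model price index: geometric mean of price relatives over
# models present in two consecutive periods. Illustrative only; model names
# and prices are hypothetical.
import math

def matched_model_index(prices_by_period):
    """prices_by_period: list of dicts mapping model id -> price, one per period."""
    index = [1.0]
    for prev, curr in zip(prices_by_period, prices_by_period[1:]):
        matched = set(prev) & set(curr)            # models sold in both periods
        if not matched:
            index.append(index[-1])                # no link possible; carry forward
            continue
        mean_log_relative = sum(math.log(curr[m] / prev[m]) for m in matched) / len(matched)
        index.append(index[-1] * math.exp(mean_log_relative))   # chain the period link
    return index

# Hypothetical example: model "B" enters in period 2, model "A" exits after it.
periods = [{"A": 2000.0}, {"A": 1800.0, "B": 2500.0}, {"B": 2300.0}]
print(matched_model_index(periods))   # approximately [1.0, 0.9, 0.828]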


99-0123
This paper examines measurement error in terms of conceptual meaning and operationalization in traditional empirical procedures, with a view toward deriving implications for measure and method development work. Specifically, the paper examines (i) the concepts of random and systematic errors, (ii) the operationalization of these concepts in traditional reliability and validity procedures, and (iii) the implications for minimizing random and systematic error in the development of a single measure as well as in the design of an entire method. Several useful conceptual distinctions are drawn between idiosyncratic and generic random error (within and across administrations) and additive and correlational systematic error (within and across measures). The level of correspondence between fundamental concepts of measurement error and their operationalizations in traditional measurement procedures leads to important considerations during measure development and measure validation as well as during method development. This research prescribes explicit conceptual examination of the nature of measurement error in conjunction with traditional empirical procedures of measure development and validation, as well as the design of methodology. Measurement error can arise in the very operationalization of the concepts of measurement error, with important implications for measure and method development in scholarly research in marketing and the social sciences.


99-0124
The consequences of a penalty exemption available to U.S. taxpayers who disclose aggressive reporting positions are examined via a game-theoretic model. Results indicate that (1) the tax agency’s expected revenue collections (net of audit costs) decline under the disclosure exemption, and (2) the impact of disclosure regulations depends on the taxpayer’s type. Of particular interest, the authors find that taxpayers who are likely to prevail on an uncertain issue decrease their expected payments even though they do not disclose in equilibrium. The impact on the amount of resources absorbed by the tax collection process is also examined.


99-0125
At equilibrium with respect to bids, entry and information acquisition, oral auctions can generate significantly more revenue than sealed-bid auctions even in the case of independently-distributed privately-known values.


99-0126
Hybrid organizational forms such as franchise systems join two or more independent parties under a contract.  The ability of each party to achieve its goals depends upon the relative bargaining power in the relationship established by the contract.  Using transaction cost economics and Porter’s (1980) characterization of sources of bargaining power, this paper argues that the franchiser can make investments in activities such as tapered integration and buyer selection to increase its bargaining power and decrease conflict and litigation in a franchise system.  Specifically, tapered integration (owning some units while franchising others), selecting inexperienced franchisees, and employing a long training program are predicted to increase the franchiser’s bargaining power and the franchisee’s compliance with franchiser standards.  An empirical analysis of litigation in restaurant franchise systems supports the theoretical hypotheses.


99-0127
The organizational form of franchising has been shown to yield higher profits and faster growth through reducing agency costs.  Why then does anyone not franchise?  In this paper, I argue that the diffuse residual claims of the franchise system reduce overall system quality, and that this problem is inherent in the nature of franchising.  The theory is tested by examining evidence from both the restaurant and the hotel industries, including chains that franchise and ones that own all of their units.  In both industries, quality is negatively related to the percent franchising in the chain, controlling for size, growth in units, monitoring costs, market segment, ownership structure, multichain operation, and price.  The results suggest that the franchise contract increases free-riding and decreases quality in decentralized service chains, and that quality is not contractible in this setting.


99-0128
Did the students get it?  This question frequently confronts finance professors following the presentation of lectures to beginning corporate finance students.  Performance on future examinations and in higher level finance courses suggests that beginning students "do not get it" as clearly as professors would like to believe.  One objective of this study is to examine if beginning students in a corporate finance class can identify the most important points in the lecture and determine the points they least understand.  Another objective is to describe and use Angelo's Fourteen Principles for improving higher learning to help students better understand the material and to improve their performance.  Finally, Angelo's principles are used to show faculty how to improve their teaching skills.  The paper provides insights for finance professors to become better teachers, to create an improved learning environment, and to enhance student learning.


99-0129
This paper proposes a methodology for the construction of an index of the impact of economic activities on pollution from the perspective of the technological linkages among those activities. The index is calculated from an input-output model and continuous, strictly monotonic functions mapping subsets of the production space for all activities into the unit interval. As a case study, I focus on industry and water pollution in a Brazilian region with significant industrial production, environmental degradation, and population. I then compare the index for the different industrial sectors with the direct and indirect employment they generate, and suggest a partial measure of the sustainability of local economic development. Local economic development turns out to rely heavily on water-polluting activities: the economic activities responsible for most production and employment in the region are also associated with most of the direct and indirect pollution of the local water resources. The extension of the proposed methodology to other economic activities and types of pollution is straightforward.
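As a hedged sketch of the input-output machinery such an index builds on (hypothetical coefficients and a normalization of our own choosing, not the paper's exact functions), direct pollution coefficients can be propagated through the Leontief total-requirements matrix to yield direct-plus-indirect pollution intensities per unit of final demand, which a strictly monotonic function then maps into the unit interval:

# Direct + indirect water-pollution intensities from a Leontief input-output model.
# Illustrative sketch with hypothetical data; the paper's normalization into the
# unit interval may differ.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # technical coefficients: inputs per unit of output
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d = np.array([0.80, 0.30, 0.10])    # direct pollution load per unit of sectoral output

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse (total requirements matrix)
total = d @ L                       # direct + indirect pollution per unit of final demand

index = total / (1.0 + total)       # one strictly monotonic map of intensities into (0, 1)
print(np.round(total, 3), np.round(index, 3))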


99-0130
In this paper, I look at the effects of long work hours on divorce using both cross-sectional and panel data from the 1991-1993 Surveys of Income and Program Participation. For the majority of Americans, 40-hour workweeks are still typical. Both men and women who work long hours (50 or more per week) have higher earnings and more education than those who work fewer hours. On average, adding ten hours to the average husband’s work week raises the probability of divorce by between one-tenth and one-half of one percentage point, while adding ten hours to the average wife’s work week raises the probability of divorce by two-tenths of one percentage point. I also find that higher income for men reduces the probability of divorce, but higher income for women increases it. Finally, estimates based on separations and instrumental-variables specifications that rely on involuntary work schedules provide weak evidence that the OLS estimates may be biased upward by people entering work at the time of a divorce.