of Economic and Business Research
Abstracts for Working Papers
This paper deals with the estimation of optimal hedge ratios. A number of recent papers have demonstrated that the ordinary least squares (OLS) method, which gives a constant hedge ratio, is inappropriate, and have recommended the use of the bivariate generalized autoregressive conditional heteroskedastic (BGARCH) model. In this paper we introduce the use of a random coefficient autoregressive (RCAR) model to estimate time-varying hedge ratios. Using daily data on spot and futures prices of corn and soybeans, we find a substantial presence of conditional heteroskedasticity, and also of random coefficients, in the regression of the return from the spot market on the return from the futures market. We also compare the hedging performance of the alternative models in terms of the variance reduction of returns. For our data set, the diagonal vech representation of the BGARCH model provides the largest reduction in the variance of the return portfolio.
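The constant OLS hedge ratio that the abstract critiques, and the variance-reduction criterion used to compare models, can be sketched as follows. This is a minimal illustration with synthetic returns, not the paper's corn and soybean data; all numbers are made up.

```python
import numpy as np

# Illustrative only: synthetic spot/futures returns standing in for the
# paper's daily corn and soybean series.
rng = np.random.default_rng(0)
futures_ret = rng.normal(0.0, 0.01, 500)
spot_ret = 0.9 * futures_ret + rng.normal(0.0, 0.003, 500)

# Constant OLS hedge ratio: the slope of spot returns on futures returns,
# i.e. Cov(spot, futures) / Var(futures).
h_ols = np.cov(spot_ret, futures_ret)[0, 1] / np.var(futures_ret, ddof=1)

# Hedging performance as variance reduction: compare the unhedged spot
# return with the hedged portfolio return, spot - h * futures.
hedged = spot_ret - h_ols * futures_ret
reduction = 1.0 - np.var(hedged, ddof=1) / np.var(spot_ret, ddof=1)
```

A time-varying alternative such as the RCAR or BGARCH models in the paper would replace the single `h_ols` with a hedge ratio recomputed each period from the conditional covariance matrix.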
These notes are an informal, first installment in an ongoing project to develop a convenient template for computational experimentation in econometrics. The approach is illustrated by means of an example based on some current research with Steve Portnoy on improving the speed of quantile regression algorithms. The computations are carried out in SPLUS, but similar techniques could be adapted for any modern computing environment designed for statistical applications. The objective is to provide a reasonably automatic, almost painless, way to make experimental results self-documenting and reproducible. With minor modifications the same approach could be adapted to empirical applications.
It is well known that most of the standard specification tests are not valid when the alternative hypothesis is misspecified. This is particularly true in the error component model when one tests for either random effects or serial correlation without taking account of the presence of the other effect. In this paper we study the size and power of standard Rao score tests analytically and by simulation when the data are contaminated by local misspecification. These tests are adversely affected under misspecification. We suggest simple procedures to test for random effects (or serial correlation) in the presence of local serial correlation (or random effects), and these tests require ordinary least squares residuals only. Our Monte Carlo results demonstrate that the suggested tests have good finite sample properties and are capable of detecting the right direction of the departure from the null hypothesis. We also provide some empirical illustrations to highlight the usefulness of our tests.
This paper analyzes mortgage-market equilibrium when borrower default costs are private information. By applying the approach of Rothschild and Stiglitz (1976), it is shown that asymmetric information regarding default costs distorts the contract choices available in the mortgage market, preventing safe borrowers (those with high default costs) from fully satisfying their demand for mortgage debt. Large loans are available for a substantial interest-rate premium, but only risky borrowers find this premium worth paying. The paper builds on an empirical literature designed to test the ruthless-default principle from option-based models of mortgage pricing. This literature provides evidence against ruthless behavior, suggesting that default costs play an important role in borrower decisions. The paper takes a further step by arguing that such costs are private information, which has important implications for market equilibrium.
Even when policy goals are largely agreed upon in an economic context, it is not uncommon for disagreement to persist over which intervention should be used to achieve the objective. This paper provides an evaluation framework for a general theory of efficient market intervention to help decide between interventions. The methodology, based primarily on agent optimization, applies in general equilibrium with minimal assumptions about technology or preferences. It can be easily modified for application to a variety of potential market interventions. A class of theorems exemplifying a general intervention principle and offering insight into a number of representative policy prescriptions is proved. The intervention principle, associated with the work of Bhagwati, Corden, Kemp, Negishi and Srinivasan, is strongly validated and extended.
We construct a model of dynamic endogenous product innovation and international trade, using it to calculate the welfare effects of lower intellectual property rights (IPR) protection in the non-innovating South than in the innovating North. We find that it is generally in the North's interest to protect its innovating sector by an import embargo on IPR-offending goods from abroad. We explain the paradoxical outcome where the North gains from weaker IPR enforcement in the South through a decomposition of the dynamic welfare formula. Key features include the ability of lower Southern IPR protection to spur innovation of Northern goods and to make available greater resources for Northern production of current consumption goods. Maintaining Northern IPR standards can be in the South's interest even though the South would favor lower uniform levels of IPR protection.
We study an auction with two distinct types of potential bidders: consumers who wish to purchase the item for their own consumption and middlemen who wish to purchase the item for the purpose of reselling it to the final consumers. Typically, the behavior of the former is studied under the private values paradigm, while the behavior of the latter is studied under the common values paradigm. We consider the possibility that both types of bidders compete in the same auction. We show that, if the middlemen have access to a larger market of consumers than the auctioneer, then the auctioneer may prefer to prevent the consumers from participating in the auction. The intuition for this result is that the presence of consumers in the auction creates a "winner's curse" effect for the middlemen: In equilibrium, the latter win when part of their customer base has relatively low valuations. This effect makes the middlemen more conservative in their bidding when they compete with consumers. In the model we consider, middlemen can access a market of N consumers by spending a marketing cost c. In the auction that the auctioneer arranges, apart from the middlemen, only one randomly chosen consumer shows up. We show that as long as c>0, the auctioneer prefers the restricted auction, under which the consumer is prevented from participating.
It is occasionally observed that auctions are designed to exclude the participation of final consumers. Resellers are the only participants in these auctions. This behavior is sometimes rationalized on the basis of transaction costs: it is cheaper to deal with a small group of individuals on a frequent basis than it is to deal with a large group of individuals on an irregular basis. In this paper we demonstrate that there is no need to appeal to any transaction costs. In particular, we show that a seller would prefer to exclude final consumers from an auction and sell the item to resellers when these resellers can gain access, at a cost, to a sufficiently larger market than the seller himself. The intuition behind our result is that the resellers can recoup their expenses for buying the item by reselling it to the final consumers. If some of the final consumers participate in the first auction and are outbid by the resellers, this is an indication that their values for the item are relatively low. Outbidding part of their customer base is 'bad news' for the resellers, and this depresses their bids when final consumers are competing with them. In fact, in the particular framework that we examine here, the intermediaries do not bid at all if final consumers are present. The socially optimal and revenue maximizing choices of auction format are not guaranteed to coincide. Even though it is possible that restricting participation of consumers is both socially and privately (for the seller) optimal, it is also possible that restricting participation is socially optimal but privately sub-optimal, and vice-versa.
We examine the blending of informational and political forces in organizational categorizations within the context of CEO compensation. By law, corporate boards are required to provide shareholders with annual justifications for their CEO pay allocations. These justifications must contain an explicit performance comparison with a set of peer companies that are selected by the board. We collected information on the industry membership of chosen peers from a 1993 sample of 280 members of the S&P 500. Our results suggest that boards anchor their comparability judgments within a firm's primary industry, thus supporting the argument that board peer definitions revolve around commonsense industry categories. At the same time, however, we also found that boards selectively define peers in self-protective ways, such that peer definitions are expanded beyond industry boundaries when firms perform poorly, industries perform well, CEOs are highly paid, and shareholders are powerful and active.
This study presents learning styles as a technique that can help teachers of the core financial management course see how students learn and, simultaneously, improve their teaching performance and enhance student learning. A brief overview of the learning style literature is presented with a focus on Gregorc's learning styles. The empirical segment of the study is based on the learning styles of 483 undergraduate students in core financial management courses at three universities. A hierarchical loglinear model is used to test various hypotheses concerning the relationship among students' sex, race, academic major and learning styles. The analysis shows that within the sample there is no difference in the learning styles of African American and Caucasian students. Also, the learning styles of female students in the sample are significantly different from those of male students. A user-friendly example provides suggested strategies for teaching present value to the four learning styles. Finally, specific recommendations and strategies are presented to show how learning style information can improve teaching performance and student learning.
This paper develops a model of trade and industrial policy where the politicians in charge of the government can direct the rents generated by their policies toward their political or economic objectives through different channels: lobbying, taxation, regulation, and tariff and quota allocation. Different mechanisms are distinguished by their point of rent extraction and differences in resource waste for each dollar of transfer. In conjunction with industrial policy, specific asset formation is also endogenized. We show that many characteristics of the model's equilibria transcend the specific channels of rent extraction that prevail. The parameters that represent the effectiveness of rent transfer through various channels play a mediating role. The results show that the relationships between these parameters and policy outcomes may be different from those based on single-channel models. We show that under reasonable conditions, a variety of parameter changes induce a positive relationship between the restrictiveness of policies toward domestic and foreign competition. This helps explain a number of important empirical regularities such as the positive association of protection with import penetration and output-capital ratio. The model also offers a guide for empirical research on the role of lobbying and other rent extraction mechanisms in policy-making.
The work of three leading figures in the early history of econometrics is used to motivate some recent developments in the theory and application of quantile regression. We stress not only the robustness advantages of this form of semiparametric statistical method, but also the opportunity to recover a more complete description of the statistical relationship between variables. A recent proposal for a more X-robust form of quantile regression based on maximal depth ideas is described along with an interesting historical antecedent. Finally, the notorious computational burden of median regression, and quantile regression more generally, is addressed. It is argued that recent developments in interior point methods for linear programming together with some new preprocessing ideas make it possible to compute quantile regressions as quickly as least squares regressions throughout the entire range of problem sizes encountered in econometrics.
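The object at the heart of quantile regression is the asymmetric "check" (pinball) loss; minimizing it over a constant recovers the sample quantile, and minimizing it over a linear predictor gives the regression quantile. The following is a minimal sketch of that idea with synthetic data, using a brute-force search over the data points; it is not the interior-point or preprocessing algorithm the abstract refers to.

```python
import numpy as np

def pinball(u, tau):
    """Check-function loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(1)
y = rng.normal(size=1000)
tau = 0.75

# Brute force: evaluate the empirical pinball risk at each data point.
# The minimizer is a tau-th sample quantile of y.
losses = [pinball(y - c, tau).sum() for c in y]
c_hat = y[int(np.argmin(losses))]
```

Replacing the constant `c` with `x @ beta` and minimizing over `beta` yields median regression at `tau = 0.5`; the linear-programming structure of that problem is what makes the interior-point methods discussed in the paper applicable.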
Drawing on insights from organizational behavior and theory, we examine the phenomenon of multiple organizational identities and how they can be managed in organizations. Specifically, we suggest that multiple organizational identities can be managed by changing the number of (identity plurality) or the relationships between (identity synergy) the identities, and we offer a classification scheme that identifies four major types of managerial responses: compartmentalization, deletion, integration, and aggregation. In addition, we suggest several key conditions that may affect the use and appropriateness of these identity management responses, and we develop a series of testable propositions for future research.
An ethnography explores the role of religious values and beliefs in building an "ideological fortress": a worldview that is seemingly impervious to attack. Specifically, this study develops the metaphor of an ideological fortress and how spirituality serves as "bricks," "wall," and "mortar" in that fortress. Used in these ways, religious values and beliefs facilitate member sensemaking by helping to socially encapsulate members, and by patching up inconsistencies within the ideology (i.e., "ideological holes"). Implications for the role of spirituality in organizational sensemaking and control are discussed.
Building from an in-depth study of a library system's response to two different issues, we propose a theoretical account of the conditions that are conducive to issue ownership. At center stage in this account are the roles of emotions and social identities in determining whether and how an issue is seen as "belonging to" organizational members. We suggest that by better understanding the social-psychological processes that explain patterns of issue ownership, we can better understand issue-related action and inaction in organizations.
Cooperative advertising, wherein a manufacturer, either directly or indirectly, reimburses a retailer for some or all of the cost of promotional advertising, is a form of trade promotion that manufacturers offer retailers to stimulate retail demand. Three forms of cooperative advertising plans are commonly observed in practice: (a) the manufacturer pays the retailer a fraction, called the participation rate, of the retailer's total advertising cost; (b) the manufacturer reduces the wholesale price by a fixed proportion, called the accrual rate, for each unit that the retailer sells; and (c) a combination of the two previous promotional plans: the manufacturer contributes up to the participation rate toward the retailer's advertising costs, but no more than the accrual rate applied to the total value of the purchases made by the retailer. The reimbursement for advertising is indirect in Plan (b), whereas it is direct in both Plans (a) and (c).
We develop and analyze a game-theoretic model of distribution channels that consist of a single manufacturer and a single retailer. We study the equilibrium behavior of each channel participant under each of the Promotional Plans (a), (b), and (c).
We find that in equilibrium Promotional Plans (a) and (c) each specify that the participation rate be set at 100%. Furthermore, Promotional Plans (a) and (c) generate the same profit for the manufacturer, and there may be no promotional plan that both the manufacturer and the retailer prefer. Under Plan (c), the participation rate, and not the accrual rate, determines how much the manufacturer reimburses the retailer for promotional advertising, indicating that, in equilibrium, the retailer does not have an incentive to "over-advertise".
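The three reimbursement schemes defined above reduce to simple arithmetic. The sketch below uses hypothetical rates and quantities purely for illustration; the paper derives the equilibrium values of these rates rather than assuming them.

```python
# Hypothetical figures, not taken from the paper.
ad_cost = 10_000.0          # retailer's total advertising cost
wholesale_price = 5.0       # per-unit wholesale price
units = 20_000              # units the retailer purchases and sells
participation_rate = 0.5    # fraction of ad cost the manufacturer pays
accrual_rate = 0.04         # proportional wholesale-price reduction

purchase_value = wholesale_price * units

# First plan: manufacturer pays a fraction (the participation rate)
# of the retailer's total advertising cost.
reimb_participation = participation_rate * ad_cost

# Second plan: indirect support via a proportional wholesale-price
# reduction (the accrual rate) on every unit the retailer sells.
reimb_accrual = accrual_rate * purchase_value

# Combination plan: pays up to the participation rate toward advertising,
# but no more than the accrual rate applied to total purchases.
reimb_combined = min(participation_rate * ad_cost,
                     accrual_rate * purchase_value)
```

With these numbers the combination plan binds at the accrual cap (4,000 rather than 5,000), which is the sense in which the cap can limit the manufacturer's exposure to retailer advertising spending.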
Recent research documents that prices of similar or identical objects tend to decline over the course of a sequential auction. This paper uses a unique data-set collected in a consistent way over a number of different auctions to show that the degree of declining prices depends on the size of the auction. It is shown that prices tend to decline faster in auctions in which a small number of lots were sold. Starting prices tend to be higher in auctions with fewer lots, while average prices are higher in auctions with more lots. Price declines do not appear to be localized at the end of the auctions. Finally, there is no evidence of serial correlation in prices or of changes in price volatility over the course of each auction.
This paper analyzes choice-theoretic costly enforcement in an intertemporal contracting model with a differentially informed investor and entrepreneur. An intertemporal contract is modeled as a mechanism in which there is limited commitment to payment and enforcement decisions. The goal of the analysis is to characterize the effect of choice-theoretic costly enforcement on the structure of optimal contracts. The paper shows that simple debt is the optimal contract when commitment is limited and costly enforcement is a decision variable (Theorem 1). In contrast, stochastic contracts are optimal when agents can commit to the ex-ante optimal decisions (Theorem 2). The paper also shows that the Costly State Verification model can be viewed as a reduced form of an enforcement model in which agents choose payments and strategies as part of a Perfect Bayesian Nash Equilibrium.
The completion of the Erie Canal traditionally receives primary credit for the rapid growth of trade through the Port of New York relative to other East Coast ports. This credit ignores the dramatic increase in imports through New York prior to the completion of the canal. We examine an alternate explanation for this earlier growth. Specifically, in 1817, the New York legislature changed the law regarding auctions of imports. In developing a theory to demonstrate the benefits of the new auction design, we give credence to the claims that the change in the auction law contributed to New York’s rapid growth.
We consider multi-unit auctions in which there are enough units so that each bidder but one wins every unit that they bid on. We characterize the equilibrium bidding strategy for three different payment rules: the pay-your-bid auction, the uniform price auction in which the price equals the lowest winning bid, and the uniform price auction in which the price equals the highest losing bid. We also consider the Vickrey pricing rule. In the case we examine, the four auctions are all efficient and thus are revenue equivalent.
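The pricing rules compared in the abstract can be illustrated for a flat list of single-unit bids. This is a toy sketch of the payment rules only; the paper's setting, with multi-unit demands and enough supply that each bidder but one wins all units bid on, is richer, and the Vickrey rule (each winner pays the externality imposed on others) is omitted here.

```python
def auction_prices(unit_bids, k):
    """Prices for k identical units under three payment rules,
    given a flat list of single-unit bids (illustration only)."""
    ranked = sorted(unit_bids, reverse=True)
    return {
        # Pay-your-bid: each of the k winners pays his own bid.
        "pay_your_bid": ranked[:k],
        # Uniform price = lowest winning bid: the k-th highest bid.
        "uniform_lowest_winning": ranked[k - 1],
        # Uniform price = highest losing bid: the (k+1)-th highest bid.
        "uniform_highest_losing": ranked[k],
    }

prices = auction_prices([10, 9, 7, 5, 3], k=3)
```

Revenue equivalence across the formats, as in the paper, is a statement about equilibrium bidding behavior, not about these mechanical rules applied to a fixed set of bids.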
Low-revenue equilibria allow participants in an auction to obtain goods at prices lower than would prevail in a competitive market. These outcomes are generated as perfect equilibria of ascending price, multi-unit auctions, without relying on future auctions or signals to sustain collusion. We argue that these equilibria could explain the low revenues of recent F.C.C. spectrum auctions, and discuss potential remedies to eliminate low-revenue equilibria.
This study examines the motives underlying foreign acquisitions of U.S. firms, estimates the extent of value creation associated with such acquisitions and examines how total gains are shared between acquiring firms and targets. We show that the synergy hypothesis is the predominant explanation for our sample of foreign acquisitions of U.S. firms. However, the hubris hypothesis coexists with the synergy hypothesis in explaining the acquisitions in our sample that are characterized by positive total gains. The evidence is also consistent with the managerialism hypothesis for the acquisitions in our sample with negative total gains. The incidence of competition is associated with higher total gains, as well as higher gains to targets. Finally, our exploratory analysis of gains associated with acquirers from different countries indicates some interesting patterns.
This paper develops a model of government policy toward industrial control and regulation that sheds light on the determinants of differential country experiences in terms of organizational arrangement and enterprise performance. In particular, it relates such outcomes to the institutional capabilities of the country and the characteristics of the enterprise. The key ingredients of the story are: First, informational rents of the enterprise managers, which politicians would like to capture and use for their own purposes. They can do this through increased intervention in operations, though efficiency suffers. Second, administrative capability, which reduces the government’s cost of monitoring and controlling various activities, including collection and use of public funds, tends to discourage intervention. This is because while administrative capability lowers the cost of intervention, the effect on the premium of public funds for politicians reduces their appetite for intervention and, under reasonable conditions, dominates. Third, when the investors in an enterprise expect to earn informational rents as managers, the premium cost of investment for the politicians declines compared to government investment and operation. This makes private investment attractive, but requires commitment. Finally, commitment capability, which lowers the cost of making policies irreversible, determines whether politicians can realize the gains from private investment. The analysis shows that the interplay of these ingredients helps explain a variety of stylized facts and puzzles and offers additional hypotheses to be tested.
The "inverse relationship" between the size of a farm and its productivity is examined in a model which emphasizes the role of supervision by family members. The pioneering paper of Gershon Feder is modified to emphasize labor rather than land as the immediate object of supervision. An important ambiguity in the usual formulation is pointed out. The reformulated model provides comparative static results generally in accord with the agriculture of South Asia.
As this paper documents, Edith Tilton Penrose’s (1959) classic The Theory of the Growth of the Firm is one of the most influential books of the second half of the twentieth century bridging economics and management. Yet, there is little understanding of the process by which this classic came about and the lessons to be learned concerning research creativity. This paper explores Penrose’s (1959) "resources approach" to the growth of the firm as an iterative process of scientific discovery via induction and scientific justification by deductive reasoning. We focus on: (i) the research process that led to Penrose’s (1959) classic; (ii) the book’s contributions to management; (iii) the generative nature of Penrose’s research for current resource-based theory; and (iv) future research building on Penrose’s "resource approach."