Bureau of Economic and Business Research   

1996 Abstracts for Working Papers

This paper reviews the evolution of rail passenger service, other than traditional streetcars, serving metropolitan areas and extending from the central cities to suburbs and outlying radial points.
The earliest form was the traditional commuter service operated by the steam railroads. This service began in the mid-nineteenth century and slowly expanded, continuing to grow until roughly 1920. The trend reversed in the late twenties and the depression years of the thirties, and service ended in some areas, but the pressure for continued service was so great that governmental bodies eventually took over support or outright operation of the remaining trains. As of 1997, service is confined to major metropolitan areas.
Next came rail rapid transit systems, beginning in the latter part of the nineteenth century, using elevated or underground tracks. For years only four metropolitan areas had such systems, but new ones have been built in seven cities in recent years. These are all assisted or operated by governmental units; all use electric power.
Amtrak has become involved in the management of a number of these systems, in addition to supporting, with state aid, radial commuter rail lines to outlying points. The future of these radial trains is very much in doubt as Amtrak support is being phased out. Some states are planning to assume the full cost; others are likely to phase out the service.
In the period shortly before and after 1900, the electric interurban railway developed; these lines were initially primarily passenger carriers, operated with electric power. While a portion of their traffic was intercity, they also supplied substantial radial and commuter services. The peak mileage year was 1916. With the coming of the automobile and then the depression, service was curtailed. By 1946 most of the mileage was gone, and by the mid-fifties the last of the service was discontinued, with one exception: the South Shore, which continues to operate (under a governmental unit) in 1997.
In the last two decades, a new type of service developed, that of modern light rail transit (LRT) using electric power, private right of way, and light equipment though having some characteristics of extended streetcar lines of the past. Ten cities now have LRT lines and others are considering them.
Of these various forms, the standard railroad operation is the most costly but the best in terms of quality of service, followed in these respects by heavy rail transit, whose chief drawback is its high initial cost. Light rail transit offers service of almost comparable quality, but it is feasible only when it is possible to use former railroad rights of way or medians in expressways.
The paper concludes with consideration of the problems faced by all metropolitan passenger carriers.

This paper analyzes the behavior of real earnings of male and female workers with different levels of education from Census data for 1967 through 1995. It develops new dynamic social rates of return to education, defined as rates that incorporate the effects of annual shifts in cross-sectional age-earnings profiles.
Dynamic rates of return for a high school education have remained relatively flat since 1967, but those for college graduates have risen sharply since 1980. Cross-sectional rates understate the true bachelor's-level dynamic rate in 1995 by 4-5 percentage points; corrected for ability and background factors, the dynamic rates are 11.97% for males and 12.65% for females.

We study whether, holding the level of a firm's performance constant, the compensation of a Chief Executive Officer (CEO) is affected by the quarterly timing of results. We find that the timing of profits does not affect CEO pay, which may suggest that smoothing firm income is as important to CEOs as maximizing short-term bonus payments. Also, managers have financial incentives to move sales between fiscal quarters. This potentially distortionary behavior may be necessary to allow boards of directors to base compensation on the best information about future performance. Finally, our most puzzling result is that CEOs are paid more for mid-year increases in market value relative to the first or fourth fiscal quarter.

This paper describes a framework for the coordination and integration of information systems. By modeling typical enterprise information systems as consisting of multiple agents with different functionalities, the methodology provides the representational formalism, coordination mechanisms, and control schemes necessary for integrating heterogeneous units of an information system while meeting such performance criteria as overall effectiveness, efficiency, responsiveness, and robustness.
The framework is applied to the development of a manufacturing information system for managing the production processes for making printed circuit boards. Performance results confirm that the system integration framework is important for supporting complex business processes that involve multiple steps of activities processed by a group of agents across a variety of functionalities.

Recognizing the deficiencies of the static monocentric-city model, urban economists have expended a large amount of research effort since the mid-1970s developing models where housing capital is durable rather than malleable. Unlike in the static model, durability of housing makes continual redevelopment of the city uneconomical, which means that the city's spatial structure at a given point in time depends on its past history. The purpose of the present paper is to provide a self-contained exposition of the major elements of the durable-housing literature, providing a useful reference for students and researchers.

This paper presents an amenity-based theory of location by income. The theory shows that the relative location of different income groups depends on the spatial pattern of amenities in a city. When the center has a strong amenity advantage over the suburbs, the rich are likely to live at central locations. When the center's amenity advantage is weak or negative, the rich are likely to live in the suburbs. The virtue of the theory is that it ties location by income to a city's idiosyncratic characteristics. It thus predicts a multiplicity of location patterns across cities, consistent with real-world observation.

Recent theoretical studies provide two alternative models of urban growth controls. In the amenity-creation model, consumers experience a negative population externality. When the city's population is reduced by a growth control, the resulting amenity is fully capitalized in land rents, creating a windfall for landowners. The alternative supply-restriction view of controls asserts that the control-related increase in land prices is caused by a restriction in the supply of developable land, having no connection to amenity effects. The present paper provides a synthesis and extension of previous work on these models. The goal is to provide a concise, self-contained treatment of the existing modeling approaches, creating a useful reference for other researchers.

Since Gauss (1821) it has been generally accepted that least-squares (L2) methods of combining observations by minimizing sums of squared errors have significant computational advantages over earlier L1 methods, based on minimization of absolute errors, advocated by Boscovich, Laplace and others. However, L1 methods are known to have significant robustness advantages over L2 methods in many applications, and related quantile regression methods provide a useful, complementary approach to classical least-squares estimation of statistical models. Combining recent advances in interior point methods for solving linear programs with a new statistical preprocessing approach for L1-type problems, we show that L1 methods can be made competitive with L2 methods in terms of computational speed throughout the entire range of problem sizes, including those based on massive datasets.
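As a toy illustration of the two criteria (not the paper's interior-point or preprocessing algorithms), an L1 fit can be posed directly as a linear program and compared with least squares on hypothetical data containing one gross outlier:

```python
# L1 (least absolute deviations) vs. L2 (least squares) fits on toy data.
# The L1 fit is posed as a linear program: min sum(r+ + r-) s.t.
# X b + r+ - r- = y, with r+, r- >= 0 splitting each residual.
import numpy as np
from scipy.optimize import linprog

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 6., 8., 100.])   # one gross outlier at x = 5
n = len(x)
X = np.column_stack([np.ones(n), x])   # design matrix: intercept + slope

# L2: ordinary least squares.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# L1: linear program with free coefficients and nonnegative residual parts.
c = np.concatenate([np.zeros(2), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * 2 + [(0, None)] * (2 * n))
b_lad = res.x[:2]

print("OLS intercept, slope:", b_ols)  # slope dragged toward the outlier
print("LAD intercept, slope:", b_lad)  # slope stays near 2
```

The L1 fit ignores the outlier and recovers the line through the other four points, while the least-squares slope is pulled far off; this is the robustness advantage the abstract refers to.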

These notes are an informal, first installment in an ongoing project to develop a convenient template for computational experimentation in econometrics. The approach is illustrated by means of an example based on some current research with Steve Portnoy on improving the speed of quantile regression algorithms. The computations are carried out in SPLUS, but similar techniques could be adapted for any modern computing environment designed for statistical applications. The objective is to provide a reasonably automatic, almost painless, way to make experimental results self-documenting and reproducible. With minor modification the same approach could be adapted to empirical applications.
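The paper's template is written in SPLUS; the sketch below is a hypothetical analogue in Python of the self-documenting idea: fix the random seed and record every parameter alongside the results, so that the run can be reproduced exactly.

```python
# Hypothetical sketch of a self-documenting computational experiment:
# the returned record carries everything needed to rerun it exactly.
import json
import random
import time

def run_experiment(n, trials, seed=12345):
    random.seed(seed)                       # reproducibility: fixed seed
    timings = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        sorted(data)                        # stand-in for the routine under study
        timings.append(time.perf_counter() - t0)
    return {"params": {"n": n, "trials": trials, "seed": seed},
            "mean_time": sum(timings) / trials}

record = run_experiment(n=1000, trials=5)
print(json.dumps(record["params"]))         # parameters travel with the result
```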

In this paper we propose an information infrastructure framework for supporting the management of electronic virtual organizations. We identify the life cycle phases (and their associated decision processes) of virtual organizations and describe the requirements for an information infrastructure to support the management of virtual organizations throughout their life cycle. We also discuss several coordination technologies, including electronic data interchange (EDI), groupware, the Internet, and Intranets. Inter/Intranet technologies are matched with the mechanisms required for virtual organization management. The importance of information infrastructure to virtual organization management is illustrated through a set of simulations of supply chains using demand management strategies such as make-to-order (MTO), assembly-to-order (ATO), and make-to-stock (MTS). A supply chain is an instance of a virtual organization in the operations phase. Performance is compared between traditional static (stable partnership) supply chains and dynamic (virtual) supply chains utilizing a dynamic material allocation (DMA) strategy to respond to environmental change.
We draw several conclusions from our study. First, the DMA strategy, in general, enables cycle times to be reduced while inventory costs remain stable. Second, we conclude that virtual organization management relies heavily on its information infrastructure. Our infrastructure supports not only first order coordination such as information sharing, but also second order coordination (adaptive innovation), where this shared information is utilized to decide how to dynamically reallocate resources across the organization. Our overall conclusion is that an information infrastructure, utilizing Internet and Intranet technology, can support the communication required for effective virtual organization management throughout its life cycle.

This paper studies the dynamics of business processes and the interactions between business units in an enterprise; to this end, we developed a framework for enterprise modeling using the process-hierarchy approach. We developed and implemented a multi-agent information system (MAIS) for the supply-chain network that captures both the structure and the processes of an enterprise. The MAIS is implemented on the Swarm simulation platform and models the order fulfillment process (OFP), one of the core tasks of supply-chain networks. In addition to modeling the interactions in the OFP, the MAIS serves as a simulation testbed for experimenting with different strategies to improve the performance of the OFP.

Optimal operating policies and corresponding managerial insight are developed for a monopolistic firm that dynamically establishes a stocking level and a selling price for its product while exploiting information gathered from ongoing operations. Given a management situation in which the demand function depends on selling price and includes an unknown scale parameter, learning occurs as the firm monitors the market's response to its decisions and then updates its characterization of the demand function. Of primary interest is the effect of censored data, since a firm's observations often are restricted to sales. Results indicate that the joint quantity/price problem reduces to a single variable problem in which the principal decision is the safety factor. In particular, the most recent decision for the safety factor sufficiently captures cumulative learning, thereby providing all relevant information for revising the characterization of the demand function. In addition, given the optimal safety factor for a period, the optimal stocking quantity/selling price decision vector is determined myopically. Further results include a computationally viable algorithm for solving a multiple period problem. Finally, from a managerial standpoint, both the first-period optimal selling price and the stocking quantity increase with the length of the problem horizon.

We provide a characterization of the Euclidean Yu solution from the multiobjective programming literature. This solution minimizes the Euclidean distance between the utopia point and the feasible set, and is closely related to solutions from the bargaining-with-claims literature. An interesting feature of this characterization is that it is formally dual to the standard characterization of the Nash bargaining solution. Characterizations are also provided for solutions which are dual to the egalitarian and Kalai-Smorodinsky bargaining solutions.

We show that for any pair of knapsack problems, there is a single problem whose optimal solution corresponds to each problem of the pair, for two adjacent right-hand sides.
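The abstract does not spell out the merging construction, but the role of the right-hand side (capacity) can be seen by solving a small, hypothetical 0/1 knapsack by dynamic programming across all capacities at once: adjacent right-hand sides pick out different optimal solutions.

```python
# 0/1 knapsack solved by dynamic programming for every right-hand side
# (capacity) up to cap; data are illustrative only.
def knapsack_values(weights, profits, cap):
    """best[c] = maximum profit attainable with capacity c."""
    best = [0] * (cap + 1)
    for w, p in zip(weights, profits):
        for c in range(cap, w - 1, -1):    # reverse scan: each item used once
            best[c] = max(best[c], best[c - w] + p)
    return best

best = knapsack_values(weights=[2, 3, 4], profits=[3, 4, 6], cap=6)
# adjacent right-hand sides 5 and 6 have different optimal solutions:
# capacity 5 packs items {1, 2} for profit 7; capacity 6 packs {1, 3} for 9
print(best[5], best[6])
```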

In this paper we propose an information infrastructure framework for supporting supply chain management. We identify the application, database, and functional requirements of the infrastructure. We also discuss the evolution of coordination technologies that fit within the functional infrastructure requirements and that have been used to support interorganizational systems, including electronic data interchange (EDI), the Internet, and, finally, Intranets. The importance of information infrastructure to supply chain management is illustrated through a set of simulations of Type I convergent assembly (automobiles), Type II divergent assembly (electronics), and Type III divergent differentiation (fashion apparel) supply chains.
We conclude that supply chain management relies heavily on its information infrastructure. The infrastructure enables inventory costs to be reduced, while maintaining acceptable order fulfillment cycle times, because information, which provides the basis for enhanced coordination and reduced uncertainty, can substitute for inventory. We also conclude that the critical information for each supply chain type varies. For example, convergent assembly relies heavily on supply (material availability and capacity) information, while divergent differentiation relies heavily on demand (forecast and order) information. We feel that an information infrastructure, utilizing Intranet technology, can support the information sharing required for effective supply chain management while reducing the security concerns that arise from using the Internet.



In English auctions an auctioneer sometimes receives (second-price) bids from bidders who cannot attend the auction. These bids are referred to as mail-in bids, left bids, or bookbids, and are commonly handled by an auctioneer's assistant. In this paper I consider auctions where one bid is left with the auctioneer in an English auction with no reserve. The auctioneer can increase his revenue by cheating, at least partially (through phantom bids or large bidding increments), on the bookbid when the bookbidder's valuation is more likely to be the highest than the second highest among all valuations. This is true even if, in equilibrium, this cheating is perfectly anticipated. This implies that the acquisition of a good reputation is not a sufficient incentive for honest auctioneer behavior. Illustrative examples are also included. These examples show that, even though partial cheating increases auctioneer revenue, complete cheating is never profitable.

Instructions and ideas for building an interactive homepage for your economics class are provided. The article tells you how to link your course materials, sources of economic data and information, your lectures and spreadsheets, and how to communicate with, examine and engage your students. A generic economics homepage will get you started.

Amtrak was established in 1971 in order to retain a basic national rail passenger service with the aid of federal funds. Despite numerous obstacles, Amtrak succeeded in many respects up to 1991. Ridership gradually increased, and the required subsidy fell as the system became more self-supporting. After 1991, however, the trend reversed; some decline in ridership occurred, and the required subsidy rose. Congress, however, long a supporter of Amtrak, cut back on its support, forcing curtailment of services. From all indications the system is not viable as it stands, given the funds Congress appears willing to provide, and changes are necessary. Under present legislation Amtrak must eliminate any deficit by 1999 or be liquidated.
Amtrak provides significant externality benefits, in the form of a lessening of highway congestion and of the need for building additional miles of expressways and interstates. The cost of the subsidy is a very minor element in the total budget picture. Restructuring of the system, with reduced service on the long-distance "vacation" trains and redirection of service in favor of shorter distance trains, which offer the greatest externality gains, should both reduce the need for subsidy and increase the gains to society.
An important element in the system has been the operation of state-supported trains, in addition to the basic national network. These include four routes in Illinois. Legislation in 1996 eliminated direct federal support for these trains, increasing the size of the state subsidy required; the costs to the states will rise still more as Amtrak shifts to measuring the deficit on the basis of fully allocated costs, rather than short-run and long-run avoidable costs. Even with these increases, the needed subsidy will be a very small element in state budgets and low relative to the costs of new expressway and interstate construction.
Amtrak also provides an important service in managing most of the rail commuter systems in the country, one that does not cost the federal government any financial support.
Illinois has been a pioneer in supporting passenger service, on four routes, all of which offer significant advantages at relatively low cost to the state. California, Washington state, and North Carolina have taken particularly active roles in sponsoring state-supported commuter trains.

One of the main purposes of the frontier literature is to estimate inefficiency. Given this objective, it is unfortunate that the issue of estimating "firm specific" inefficiency has not received much attention. To estimate firm specific (technical) inefficiency, the standard procedure is to use the mean of the inefficiency term conditional on the entire composed error, as suggested by Jondrow, Lovell, Materov and Schmidt (1982). This conditional mean can be viewed as the average loss of output (return). Along with the mean, it is quite natural to consider the conditional variance, which provides a measure of production uncertainty or risk. Once we have the conditional mean and variance, we can easily construct confidence intervals for firm specific inefficiency. We postulate that when a firm moves toward the frontier it not only increases its efficiency but also reduces its production uncertainty, leading to shorter confidence intervals. Analytical expressions for production uncertainty under different distributional assumptions are provided, and it is shown that technical inefficiency and production uncertainty are monotonic functions of the entire composed error term. It is interesting to note that this monotonicity result holds under different distributional assumptions on the inefficiency term. Furthermore, some alternative measures of production uncertainty are proposed, and the concept of production uncertainty is generalized to panel data models. Finally, our theoretical results are illustrated with an empirical example.
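For the widely used normal/half-normal specification, the Jondrow et al. conditional mean has a closed form; the sketch below (with illustrative parameter values, not drawn from the paper) computes it and checks the monotonicity property numerically.

```python
# Jondrow-Lovell-Materov-Schmidt (1982) conditional mean E[u | eps] for the
# normal / half-normal stochastic frontier with composed error eps = v - u.
# Parameter values are illustrative only.
from math import sqrt
from scipy.stats import norm

def jlms_mean(eps, sigma_u, sigma_v):
    sigma = sqrt(sigma_u**2 + sigma_v**2)
    lam = sigma_u / sigma_v
    sigma_star = sigma_u * sigma_v / sigma
    a = eps * lam / sigma
    # inverse Mills ratio phi(a) / (1 - Phi(a)), shifted by a
    return sigma_star * (norm.pdf(a) / (1.0 - norm.cdf(a)) - a)

# the estimator is positive and monotonically decreasing in the composed error
vals = [jlms_mean(e, sigma_u=1.0, sigma_v=0.5) for e in (-1.0, 0.0, 1.0)]
print(vals)
```

A large negative composed error (output far below the frontier) yields a large estimated inefficiency, and the estimate declines monotonically as the composed error rises, which is the monotonicity property the abstract emphasizes.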

We consider a new model of a local public good economy with differentiated crowding in which a distinction is made between the tastes and crowding characteristics of agents. In this model it is possible to have taste-homogeneous jurisdictions that take advantage of the full array of possible crowding effects (labor complementarities, for example). Nevertheless, we find the somewhat surprising result that taste-heterogeneous jurisdictions are sometimes strictly superior to taste-homogeneous jurisdictions with the same crowding profile. We introduce a notion of hedonic independence, which stipulates that the values of an agent's characteristics (his taste type and crowding type) are independent. In general, hedonic independence is not satisfied; for example, there may be an advantage in having a taste for the sort of product that you are good at producing. We show that if hedonic independence is satisfied, however, then the core is essentially taste-homogeneous. We conclude by discussing how hedonic independence might arise from market interactions.

We examine a local public goods economy with differentiated crowding. The main innovation is that we assume that the crowding effects of agents are a result of choices that agents make. For example, agents may be crowded (positively or negatively) by the skills that other members of their jurisdictions possess and these skills may be acquired through utility maximizing educational investment choices made in response to equilibrium wages and educational costs. In such an environment, we show that taste-homogeneous jurisdictions are optimal. This contrasts with results for both the standard differentiated crowding model, and crowding types model. We also show that the core and equilibrium are equivalent, and that decentralization is possible through anonymous prices having a structure similar to cost-share equilibrium prices.

This paper models round robin group decisions with individuals updating their evaluations every round. The updating mechanism of each individual in any round is as in Bordley (1983). The analysis reported in this paper shows (a) the conditions under which any such group would converge to a consensus evaluation, (b) the maximum number of rounds before such convergence occurs, and (c) the consensus evaluation. The model is also shown to capture the empirical results reported in group decision making by Boje and Murnighan (1982), in consumer decision making by Rao and Steckel (1991), and in the Delphi forecasting technique by Dalkey and Helmer (1963).
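The convergence behavior can be illustrated with a toy simulation. The sketch below uses a simple averaging rule in which each member moves part way toward the group mean every round; this is a hypothetical DeGroot-style stand-in, not Bordley's (1983) exact updating mechanism.

```python
# Round-robin consensus illustration: every round, each member revises its
# evaluation toward the current group average. Hypothetical stand-in for
# Bordley's (1983) updating rule; initial evaluations are illustrative.
def run_rounds(evals, weight=0.5, tol=1e-9, max_rounds=1000):
    evals = list(evals)
    for rnd in range(1, max_rounds + 1):
        avg = sum(evals) / len(evals)
        evals = [(1 - weight) * e + weight * avg for e in evals]
        if max(evals) - min(evals) < tol:   # consensus reached
            return rnd, evals
    return max_rounds, evals

rounds, final = run_rounds([0.2, 0.5, 0.9])
print(rounds, final[0])   # all members end at the mean of the initial evaluations
```

The spread shrinks by the factor (1 - weight) each round while the group mean is preserved, so the process converges geometrically to a consensus at the initial mean, mirroring the convergence and bounded-rounds results described in the abstract.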

This paper constitutes a brief, rather idiosyncratic, survey of rank tests stressing their connection in linear model applications to the theory of quantile regression through the formal duality of linear programming based on the regression rankscore functions introduced by Jurečková and Gutenbrunner.


A central challenge in the knowledge creation process of strategic management is to integrate previously unconnected theories singularly focused on either the content or the processes of strategy making. We discuss approaches to the integration of such dissociative theories of strategy at three levels: (1) the strategy making and testing processes of managers competing in specific contexts; (2) the theory building and testing processes of researchers looking for insights that are generalizable across competitive contexts; and (3) the potential forms of interactions between managers and researchers that can be effective in building generalized strategy theory that also works in specific contexts.
This paper suggests that strategy researchers and managers become engaged in an interactive, reciprocating process whose objective is building pragmatic strategy theory. To this end, we propose that researchers and managers embark on a new theory-building process in which the generalized theories of researchers and contextual theories of managers may evolve a new model of double-loop learning. In reconnecting management research and management practice, the model of double-loop learning we propose here is an empirical research method in the pragmatic scientific tradition. As such, it offers a useful addition to the positivist scientific mainstream in strategy research.

This paper reviews procedures for computing saddle points of certain continuous concave-convex functions defined on polyhedra and investigates how certain parameters and payoff functions influence equilibrium solutions. The discussion centers on two widely-studied applications: missile defense and market share attraction games. In both settings, each player allocates a limited resource, called effort, among a finite number of alternatives. Equilibrium solutions to these two-person games are particularly easy to compute under a "proportional effectiveness" hypothesis, either in closed form or in a finite number of steps. One of the more interesting qualitative properties we establish is the identification of conditions under which the maximizing player can ignore the values of the alternatives in determining allocation decisions.

This paper develops and validates measures of intergenerational communication and influence about consumption. Despite the widespread belief that parents play a pivotal role in the consumer socialization of their children, empirical research on the skills, attitudes, and preferences that are transmitted from one generation to the next is quite limited. One factor that may explain this deficiency is the lack of appropriate instruments for assessing intergenerational issues. Drawing on consumer socialization theory and research, intergenerational transmission is defined in terms of three components that are directly relevant to marketplace transactions: (i) consumer skills, (ii) preferences, and (iii) attitudes towards marketer supplied information. Multiple item scales were developed to measure each of these components. The findings of four studies are reported that support the reliability, dimensionality, and validity of the intergenerational scales. Validation efforts incorporate cross-cultural analyses from the U.S. and Thailand, as well as dyadic level comparisons between parents and children.

Magnitudes describing product attributes are basic elements used in decision making. Although several researchers have emphasized the need to understand how consumers categorize product attributes, empirical research on this issue is rare. As a first step in developing and evaluating methodologies to examine this issue, this study used a sorting methodology. Hypotheses were generated to address important theoretical issues relating to how consumers use magnitudes describing product attributes. These hypotheses were tested in four studies. The results suggest that the number of magnitudes used by consumers to think about product attributes (i) is higher for abstract when compared to concrete attributes, (ii) does not increase with attribute importance among salient attributes, and (iii) is positively related to the number of magnitudes used in an overall evaluation of liking. The effect of "codability" of attributes was also examined. Results of all studies also provided evidence to support the use of the sorting method. Implications of this research are discussed.

This study examines the effects of three factors, task (learning or choice), information format (brand-based or attribute-based), and information mode (numerical or verbal), on comparative judgments of brand attributes. Results provide some support that subjects directed to learn product attribute values made comparative judgments that were slower, but more accurate, than those of subjects directed to make a choice. Further, compared to verbal labels, numerical labels resulted in more accurate comparative judgments. In addition, a significant interaction effect of task and information mode was found. The research and managerial implications of these findings are discussed.

The objective of this research is to examine the representation of numerical versus verbal product information in consumer memory. A conceptual framework based on surface versus meaning level processing of information is developed to examine the representation of numerical versus verbal information in consumer memory. The basic proposition tested here is that numerical information may be represented in memory in a form isomorphic with its presentation in the external environment to a greater degree than verbal information. Research bearing on memory representations is discussed to bring out the methodological importance of using a recognition paradigm. This paradigm is adopted here, and hypotheses are developed about the recognition of numerical versus verbal product information following a learning task. The details of two experiments conducted to test the hypotheses are presented. Implications of this research for consumer behavior are discussed.


Marketing information about products is often conveyed by providing numerical or verbal information along specific attributes. Such information is the basic input to consumer decision making that is utilized to make higher-level decisions. This paper reviews empirical work on numerical and verbal information with the aim of synthesizing past research in terms of what we know and where we go from here. In keeping with this goal, the review of empirical research is organized in terms of different elements of decision making, specifically, information search, comparisons, memory, and evaluations. Details on the empirical design of each study reviewed here are provided to enable comparisons across studies. Insights drawn from each area are synthesized in a discussion of theoretical implications and future research directions, in terms of the dimensions along which numerical and verbal information differ and the impact of ability and motivation to process information.

This paper conceptualizes the construct of consumers' preference for numerical information, defined as consumers' proclivity toward using numerical information and engaging in thinking involving numerical information, and develops and validates a measure of this construct. The construct of consumers' preference for numerical information is unique when compared to other individual difference variables in marketing and consumer psychology in its explicit focus on consumers' preference for magnitude information, the basic element used in consumer decision making. Consumers' preference for numerical information is argued to influence usage of numerical magnitude information as well as other forms of product magnitude information such as verbal information. A basic attitude toward numerical information may impact various aspects of decision making by influencing the degree to which consumers acquire and use product information. Consumers' preference for numerical information may moderate important phenomena in marketing that involve the use of product magnitude information. A multiple item scale was generated on the basis of the conceptual definition for consumers' preference for numerical information (CPNI). Several studies provided support for the scale's reliability, unidimensionality, and validity. The CPNI scale was shown also to be related to aspects of consumer decision making. Implications of this construct for consumer research are discussed.

This paper's empirical results indicate that the average effect of antitakeover provisions on subsequent long-term investment is negative. The interpretation of these results depends on whether one thinks that there was too much, too little, or just the right amount of long-term investment prior to the antitakeover provision adoption. We use agency theory to devise more refined empirical tests of the effects of the antitakeover provision adoptions by managers in firms with different incentive and monitoring structures. Governance variables (e.g., percentage of outsiders on corporate boards, and separate CEO/Chairperson positions) have an insignificant impact on subsequent long-term investment behavior. However, consistent with agency theory predictions, managers in firms with better economic incentives (higher insider ownership) tend to cut subsequent long-term investment less than managers in firms with less incentive alignment. Furthermore, managers in firms with greater external monitoring (due to higher institutional ownership) also tend to cut subsequent long-term investment less than managers in firms with less external monitoring. Thus, the decrease in subsequent long-term investment is significantly less for firms where the managers have greater incentives to act in shareholders' interests.
Finally, there are interesting effects of the control variables. First, high book equity/market equity firms cut total long-term investment more. Second, firms that were takeover targets or rumored to be takeover targets cut long-term investment more. These results suggest that inefficient firms cut long-term investment more when an antitakeover provision is adopted.

Jørgen Pedersen (1890-1973) introduced Keynesian ideas in Denmark but was more than just another Keynesian. In their basic models neither Keynes himself nor his American followers found room for labor unions. In 1944 Pedersen found such room and analyzed the macroeconomic consequences of a collision between wage policy and monetary policy. His analysis was intuitive, but the present article offers a rigorous restatement of it.