The paper considers a model of competition among firms that produce a homogeneous good in a networked environment. A bipartite graph determines which subset of markets a firm can supply to. Firms compete in Cournot and decide how to allocate their production output to the markets they are directly connected to. We provide a characterization of the production quantities at the unique equilibrium of the resulting game for any given network. Our results identify a novel connection between the equilibrium outcome and supply paths in the underlying network structure. We then proceed to study the impact of changes in the competition structure, e.g., due to a firm expanding into a new market or two firms merging, on firms' profits and consumer welfare. The modeling framework we propose can be used to assess whether expanding into a new market is profitable for a firm, to identify opportunities for collaboration between competing firms, e.g., a merger, joint venture, or acquisition, and to guide regulatory action in the context of market design and antitrust analysis.
Participants race towards completing an innovation project and learn about its feasibility from their own efforts and their competitors' gradual progress. Information about the status of competition can alleviate some of the uncertainty inherent in the contest, but it can also adversely affect effort provision from the laggards. This paper explores the problem of designing the award structure of a contest and its information disclosure policy in a dynamic framework and provides a number of guidelines for maximizing the designer's expected payoff. In particular, we show that the probability of obtaining the innovation as well as the time it takes to complete the project are largely affected by when and what information the designer chooses to disclose. Furthermore, we establish that intermediate awards may be used by the designer to appropriately disseminate information about the status of competition. Interestingly, our proposed design matches several features observed in real-world innovation contests.
This paper studies sourcing decisions of firms in a multi-tier supply chain when procurement is subject to disruption risk. We argue that features of the production process that are commonly encountered in practice (including differential production technologies and financial constraints) may result in the formation of inefficient supply chains, owing to the misalignment of the sourcing incentives of firms at different tiers. We provide a characterization of the conditions under which upstream suppliers adopt sourcing strategies that are sub-optimal from the perspective of firms further downstream. Our analysis highlights that a focus on optimizing procurement decisions in each tier of the supply chain in isolation may not be sufficient for mitigating risks at an aggregate level. Rather, we argue that a holistic view of the entire supply network is necessary to properly assess and secure against disruptive events. Importantly, the misalignment we identify does not originate from cost or reliability asymmetries. Rather, firms' sourcing decisions are driven by the interplay of the firms' risk considerations with non-convexities in the production process. This implies that bilateral contracts that could involve under-delivery penalties may be insufficient to align incentives.
This paper studies the strategic interaction between a monopolistic seller of an information product and a set of potential buyers that compete in a downstream market. Our analysis illustrates that the nature and intensity of competition among the information provider's customers play first-order roles in determining her optimal strategy. We show that when the customers view their actions as strategic complements (such as in Bertrand competition), the provider finds it optimal to offer the most accurate information at her disposal to all potential customers. In contrast, when buyers view their actions as strategic substitutes (for example, when they compete with one another in Cournot), the provider maximizes her profits by either (i) restricting the overall supply of the information product, or (ii) distorting its content by offering a product of inferior quality. We also establish that the provider's incentive to restrict the supply or quality of information provided to the downstream market intensifies in the presence of information leakage.
Motivated by the proliferation of online platforms that collect and disseminate consumers’ experiences with alternative substitutable products/services, we investigate the problem of optimal information provision when the goal is to maximize aggregate consumer surplus. We develop a decentralized multiarmed bandit framework where a forward-looking principal (the platform designer) commits up front to a policy that dynamically discloses information regarding the history of outcomes to a series of short-lived rational agents (the consumers). We demonstrate that consumer surplus is nonmonotone in the accuracy of the designer’s information-provision policy. Because consumers are constantly in “exploitation” mode, policies that disclose accurate information on past outcomes suffer from inadequate “exploration.” We illustrate how the designer can (partially) alleviate this inefficiency by employing a policy that strategically obfuscates the information in the platform’s possession; interestingly, such a policy is beneficial despite the fact that consumers are aware of both the designer’s objective and the precise way by which information is being disclosed to them. More generally, we show that the optimal information-provision policy can be obtained as the solution of a large-scale linear program. Noting that such a solution is typically intractable, we use our structural findings to design an intuitive heuristic that underscores the value of information obfuscation in decentralized learning. We further highlight that obfuscation remains beneficial even if the designer can directly incentivize consumers to explore through monetary payments.
Motivated by diverse application areas such as healthcare, call centers, and crowdsourcing, we consider the design and operation of service systems that process tasks with types that are ex ante unknown, and employ servers with different skill sets. Our benchmark model involves two types of tasks, “Easy” and “Hard,” and servers that are either “Junior” or “Senior” in their abilities. The service provider determines a resource allocation policy, i.e., how to assign tasks to servers over time, with the goal of maximizing the system's long-term throughput. Information about a task's type can only be obtained while serving it. In particular, the more time a Junior server spends on a task without service completion, the higher her belief that the task is Hard and, thus, needs to be rerouted to a Senior server. This interplay between service time and task-type uncertainty implies that the system's resource allocation policy and staffing levels implicitly determine how the provider prioritizes between learning and actually serving. We show that the performance loss due to the uncertainty in task types can be significant and, interestingly, the system's stability region is largely dependent on the rate at which information about the type of a task is generated. Furthermore, we consider endogenizing the servers' capabilities: assuming that training is costly, we explore the problem of jointly optimizing over the training levels of the system's server pools, the staffing levels, and the resource allocation policy. We find that among optimal designs there always exists one with a “hierarchical” structure, where all tasks are initially routed to the least skilled servers and then progressively move to more skilled ones, if necessary. Comparative statics indicate that uncertainty in task types leads to significantly higher staffing costs and less specialized server pools.
(with D Negoescu, M Brandeau, and D Iancu)
Currently available medication for treating many chronic diseases is often effective only for a subgroup of patients, and biomarkers accurately assessing whether an individual belongs to this subgroup typically do not exist. In such settings, physicians learn about the effectiveness of a drug primarily through experimentation—i.e., by initiating treatment and monitoring the patient’s response. Precise guidelines for discontinuing treatment are often lacking or left entirely to the physician’s discretion. We introduce a framework for developing adaptive, personalized treatments for such chronic diseases. Our model is based on a continuous-time, multi-armed bandit setting where drug effectiveness is assessed by aggregating information from several channels: by continuously monitoring the state of the patient, but also by (not) observing the occurrence of particular infrequent health events, such as relapses or disease flare-ups. Recognizing that the timing and severity of such events provide critical information for treatment decisions is a key point of departure in our framework compared with typical (bandit) models used in healthcare. We show that the model can be analyzed in closed form for several settings of interest, resulting in optimal policies that are intuitive and may have practical appeal. We illustrate the effectiveness of the methodology by developing a set of efficient treatment policies for multiple sclerosis, which we then use to benchmark several existing treatment guidelines.
Online retail has reduced the cost of obtaining information about a product's price and availability. Consequently, consumers can strategically time their purchases, weighing the costs of monitoring and the risk of inventory depletion against a prospectively lower price. At the same time, firms can observe and exploit their customers' monitoring behavior. Using a dataset tracking customers of a North American specialty retail brand, we present empirical evidence that consumers are forward-looking and that monitoring products online is associated with successfully obtaining discounts. We develop a structural model relating consumers' dynamic behavior to their monitoring costs and find substantial heterogeneity, with consumers' opportunity costs for an online visit ranging from $2 to $25 in inverse relation to their price elasticities. Our estimation results have important implications for retail operations. First, the randomized markdown policy observed in practice benefits retailers by combining price commitment with exploitation of the heterogeneity in consumers' monitoring costs. We estimate that the retailer's profit under randomized markdowns is 81% higher than under subgame-perfect, state-contingent pricing. Importantly, our model combines the effects of pricing and inventory management: we find that optimal inventory levels are 133% higher under the randomized markdown policy. Targeting customers with price promotions based on their online histories further increases profits by 6%. The informational burden of implementing targeted promotions is minimal, since a simple scalar metric, the customer's purchase-to-visit ratio, captures virtually all the value associated with tracking her entire online history. Lastly, and counter-intuitively, reducing consumers' monitoring costs may substantially benefit the seller by intensifying the consumers' competition for the retailer's inventory.
Recent advances in information technology have allowed firms to gather vast amounts of data regarding consumers' preferences and the structure and intensity of their social interactions. This paper examines a game-theoretic model of competition between firms which can target their marketing budgets to individuals embedded in a social network. We provide a sharp characterization of the optimal targeted advertising strategies and highlight their dependence on the underlying social network structure. Furthermore, we provide conditions under which it is optimal for the firms to asymmetrically target a subset of the individuals and establish a lower bound on the ratio of their payoffs in these asymmetric equilibria. Finally, we find that at equilibrium firms invest inefficiently heavily in targeted advertising, and that the extent of the inefficiency is increasing in the centralities of the agents they target. Taken together, these findings shed light on the effect of the network structure on the outcome of marketing competition between the firms.
Risk pooling has been extensively studied in the operations management literature as the basic driver behind strategies such as transshipment, manufacturing flexibility, component commonality, and drop-shipping. This paper explores the benefits of pooling in the context of inventory management using the canonical model first studied in Eppen (1979). Specifically, we consider a single-period multi-location newsvendor model, where different locations face independent and identically distributed demands and linear holding and backorder costs. We show that Eppen's celebrated result, i.e., that the cost savings from centralized inventory management scale with the square root of the number of locations, depends critically on the "light-tailed" nature of the demand uncertainty. In particular, we establish that the relative benefits of risk pooling for a class of heavy-tailed demand distributions (stable distributions) scale as n^((α-1)/α), i.e., lower than the √n scaling predicted for normally distributed demands, where α ∈ (1,2] is a parameter that captures the shape of the distribution's tail. Furthermore, we discuss the implications of our findings for the performance of periodic-review policies in multi-period inventory management as well as for the profits associated with drop-shipping fulfillment strategies. Paired with an extensive simulation analysis, these results highlight the importance of taking into account the shape of the tail of the demand uncertainty before implementing a risk-pooling strategy.
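The scaling result above can be illustrated with a few lines of arithmetic. This is a minimal numerical sketch of the stated formula only (the function name and the sample values of n and α are my own); it shows how the pooling benefit degrades as the tail index α falls below 2, and how α = 2 recovers the familiar √n case.

```python
# Sketch of the paper's scaling result for the risk-pooling benefit:
#   light tails (normal demand):      savings scale as sqrt(n)
#   heavy tails (stable, tail index alpha in (1, 2]): n^((alpha - 1) / alpha)
# Note that alpha = 2 gives n^(1/2), i.e., the classical sqrt(n) scaling.

def pooling_factor(n: int, alpha: float = 2.0) -> float:
    """Scaling of centralized-inventory cost savings across n locations."""
    assert n >= 1 and 1.0 < alpha <= 2.0
    return n ** ((alpha - 1.0) / alpha)

for alpha in (2.0, 1.5, 1.1):
    # Heavier tails (smaller alpha) imply a markedly smaller pooling benefit.
    print(f"alpha={alpha}: factor for n=100 is {pooling_factor(100, alpha):.3f}")
```

For n = 100 locations, the factor drops from 10 at α = 2 to roughly 4.6 at α = 1.5, which conveys the paper's point that tail shape materially changes the value of pooling.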
We develop a model of information exchange through communication and investigate its implications for information aggregation in large societies. An underlying state determines payoffs from different actions. Agents decide which other agents to form a communication link with, incurring the associated cost. After receiving a private signal correlated with the underlying state, they exchange information over the induced communication network until taking an (irreversible) action. We define asymptotic learning as the fraction of agents taking the correct action converging to one as a society grows large. Under truthful communication, we show that asymptotic learning occurs if (and under some additional conditions, also only if) in the induced communication network most agents are a short distance away from "information hubs", which receive and distribute a large amount of information. Asymptotic learning therefore requires information to be aggregated in the hands of a few agents. We also show that while truthful communication may not always be a best response, it is an equilibrium when the communication network induces asymptotic learning. Moreover, we contrast equilibrium behavior with a socially optimal strategy profile, i.e., a profile that maximizes aggregate welfare. We show that when the network induces asymptotic learning, equilibrium behavior leads to maximum aggregate welfare, but this may not be the case when asymptotic learning does not occur. We then provide a systematic investigation of what types of cost structures and associated social cliques (consisting of groups of individuals linked to each other at zero cost, such as friendship networks) ensure the emergence of communication networks that lead to asymptotic learning.
Our results show that societies consisting of many sufficiently large social cliques do not induce asymptotic learning, because each social clique has sufficient information by itself, making communication with others relatively unattractive. Asymptotic learning results either if social cliques are not too large, in which case communication across cliques is encouraged, or if there exist very large cliques that act as information hubs.
(with O Candogan and A Ozdaglar)
Operations Research, 60(4): 883-905, July-August 2012
The paper appeared as an Extended Abstract at WINE 2010
We study the optimal pricing strategies of a monopolist selling a divisible good (service) to consumers who are embedded in a social network. A key feature of our model is that consumers experience a (positive) local network effect. In particular, each consumer's usage level depends directly on the usage of her neighbors in the social network structure. Thus, the monopolist's optimal pricing strategy may involve offering discounts to certain agents who have a central position in the underlying network. Our results can be summarized as follows. First, we consider a setting where the monopolist can offer individualized prices and derive a characterization of the optimal price for each consumer as a function of her network position. In particular, we show that it is optimal for the monopolist to charge each agent a price that consists of three components: (i) a nominal term which is independent of the network structure, (ii) a discount term proportional to the influence that this agent exerts over the rest of the social network (quantified by the agent's Bonacich centrality), and (iii) a markup term proportional to the influence that the network exerts on the agent. In the second part of the paper, we discuss the optimal strategy of a monopolist who can only choose a single uniform price for the good and derive an algorithm, polynomial in the number of agents, to compute such a price. Third, we assume that the monopolist can offer the good at two prices, full and discounted, and study the problem of determining which set of consumers should be given the discount. We show that the problem is NP-hard; however, we provide an explicit characterization of the set of agents who should be offered the discounted price. Next, we describe an approximation algorithm for finding the optimal set of agents. We show that if the profit is nonnegative under any feasible price allocation, the algorithm guarantees at least 88% of the optimal profit.
Finally, we highlight the value of network information by comparing the profits of a monopolist who does not take into account the network effects when choosing her pricing policy to those of a monopolist who uses this information optimally.
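The discount and markup terms above are driven by Bonacich centrality, which is a standard linear-algebra computation. The sketch below is purely illustrative and is not the paper's pricing formula: the function name, the decay parameter value, and the 3-agent line network are assumptions chosen for the example.

```python
# Illustrative computation of Bonacich centrality, the network measure that
# the optimal individualized prices in the paper depend on.
# Centrality vector: c = (I - a*G)^(-1) * 1, valid when a * rho(G) < 1,
# where G is the adjacency matrix and rho(G) its spectral radius.
import numpy as np

def bonacich(G: np.ndarray, a: float) -> np.ndarray:
    """Bonacich centrality of each agent in the network G with decay a."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - a * G, np.ones(n))

# Hypothetical 3-agent line network 0 -- 1 -- 2; agent 1 sits in the middle.
G = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
c = bonacich(G, 0.3)
print(c)  # the central agent (index 1) has the largest centrality
```

In the paper's characterization, an agent like the central one here, who exerts more influence on the rest of the network, would receive a larger discount component in her price.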
This paper studies a simple model of experimentation and innovation. Our analysis suggests that patents improve the allocation of resources by encouraging rapid experimentation and efficient ex post transfer of knowledge. Each firm receives a signal on the success probability of a project and decides when to experiment. Successes can be copied. First, we assume that signal qualities are the same. Symmetric equilibria involve delayed and staggered experimentation, whereas the optimal allocation never involves delays and may involve simultaneous experimentation. Appropriately designed patents implement the optimal allocation. Finally, we discuss the case when signals differ and are private information.
(with D Acemoglu and A Ozdaglar)
Games and Economic Behavior, 66(1): 1-26, May 2009
We study the efficiency of oligopoly equilibria in a model where firms compete over capacities and prices. Our model economy corresponds to a two-stage game. First, firms choose their capacity levels. Second, after the capacity levels are observed, they set prices. Given the capacities and prices, consumers allocate their demands across the firms. We establish the existence of pure strategy oligopoly equilibria and characterize the set of equilibria. We then investigate the efficiency properties of these equilibria, where "efficiency" is defined as the ratio of surplus in equilibrium relative to the first best. We show that efficiency in the worst oligopoly equilibria can be arbitrarily low. However, if the best oligopoly equilibrium is selected (among multiple equilibria), the worst-case efficiency loss is 2(√N-1)/(N-1) with N firms, and this bound is tight. We also suggest a simple way of implementing the best oligopoly equilibrium.
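The tight bound stated above has a simple closed form that is easy to evaluate. This is a minimal sketch of that formula only (the function name is my own); algebraically, 2(√N−1)/(N−1) simplifies to 2/(√N+1), which makes the decay in N transparent.

```python
# The paper's tight worst-case efficiency bound for the best oligopoly
# equilibrium with N firms: 2*(sqrt(N) - 1)/(N - 1), which equals
# 2/(sqrt(N) + 1) after factoring N - 1 = (sqrt(N) - 1)*(sqrt(N) + 1).
import math

def best_equilibrium_efficiency_bound(N: int) -> float:
    """Worst-case ratio of best-equilibrium surplus to first-best surplus."""
    assert N >= 2
    return 2.0 * (math.sqrt(N) - 1.0) / (N - 1.0)

for N in (2, 4, 100):
    print(N, round(best_equilibrium_efficiency_bound(N), 4))
```

The bound equals about 0.83 for a duopoly and decays toward zero as the number of firms grows, so even the best equilibrium can capture an arbitrarily small share of first-best surplus in large markets.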
We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform's prices. Our findings highlight the impact of the demand pattern on the platform's prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is “balanced” across the network's locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform.
Platforms can obtain sizable returns by operationally managing their market thickness, i.e., the availability of supply-side inventory. Using data from a natural experiment on a major B2B auction platform specializing in the $424 billion secondary market for liquidating retail merchandise, we find that thickening the platform's market by consolidating the ending times of auctions to certain weekdays increases its revenue by roughly 6.5%, due primarily to the bidders' participation frictions. We study two complementary design levers to calibrate and control the platform's market thickness in the face of complex demand-side decision making: (i) its listing policy, which determines the ending times of auctions, and (ii) a recommendation system. To optimize these design decisions, we first develop a structural model to characterize how bidders form expectations and respond to the imminent availability of auctions in equilibrium, including how frequently they visit the platform, in which auctions they choose to participate, and their bidding strategies. In calibrating its market thickness, the platform trades off increasing bidder participation in each auction by appropriately thickening the market (demand-side competition) against limiting the extent to which auctions for substitutable goods ultimately cannibalize one another under thicker market conditions (supply-side competition). Using our structural estimates, we illustrate how the platform can optimize its listing policy as a function of the incoming liquidation inventory and its bidder pool so as to achieve a supply-demand sweet spot, thereby increasing its revenue significantly relative to having auctions end after a fixed time.
Furthermore, we find that real-time recommendations sent on the market's thickest days would add 3% revenue on such days (on top of the benefits obtained by optimizing the platform's listing policy) by reducing supply-side cannibalization and altering the composition of participating bidders.
This paper studies multi-tier supply chain networks in the presence of disruption risk. Firms decide how to source their inputs from upstream suppliers so as to maximize their expected profits, and prices of intermediate goods are set so that markets clear. We provide an explicit characterization of equilibrium prices and profits, which allows us to derive insights on how the network structure, i.e., the number of firms in each tier, production costs, and disruption risk affect firms' profits. We discuss the prescriptive implications of our findings by exploring how a firm should prioritize among its suppliers (direct and indirect) when investing in improving their production reliability. Furthermore, we establish that networks that maximize profits for firms that operate in different stages of the production process, i.e., for the upstream supplier and the downstream retailer, are structurally different. In particular, the former have relatively less diversified downstream tiers and generate more variable output than the latter. Finally, we study the question of endogenous chain formation by considering a game of entry, i.e., firms decide whether to engage in production by forming beliefs about their profits in the post-entry supply chain. We argue that endogenous entry leads to chains that are inefficient in terms of the number of firms that engage in production.
Conference Papers
Spatial Pricing in Ride-Sharing Networks, with O Candogan and D Saban
Designing Dynamic Contests, with S Ehsani and M Mostagir
ACM Conference on Economics and Computation (EC)
Cournot Competition in Networked Markets, with S Ehsani and R Ilkilic
ACM Conference on Economics and Computation (EC)
Competition with Atomic Users, with A Ozdaglar, Asilomar
Partial Results on Capacity Competition, with D Acemoglu and A Ozdaglar, Allerton