"Social Choice under Gradual Learning" (with Caroline Thomas and Takuro Yamashita)
This paper combines dynamic mechanism design with collective experimentation. Agents are heterogeneous: some stand to benefit from a proposed policy reform, while others are better off under the status quo. Each agent's private information regarding her preference type accrues only gradually over time. A principal seeks a mechanism that maximizes the agents' joint welfare while providing incentives for them to truthfully report their gradually acquired private information. The first-best policy may not be incentive compatible, as uninformed agents may have an incentive to vote for a policy prematurely rather than wait for their private signals. Under the second-best policy, the principal can incentivize truth-telling by setting a deadline for experimentation, delaying the implementation of the policy reform, and keeping agents in the dark regarding others' reports.
"A Dynamic Model of Censorship" (Revise and Resubmit, Theoretical Economics)
We model censorship as a dynamic game between an agent and an evaluator. Two types of public news, good and bad, are informative about the agent's ability. However, the agent can hide bad news from the evaluator, at some cost, and will do so if and only if this secures her a significant increase in tenure. Thus, the evaluator faces a bandit problem with endogenous news processes. When bad news is conclusive, the agent always censors when the public belief is sufficiently high, but below a threshold she stops censoring, either entirely or partially. The possibility of censorship hurts the evaluator and the good agent, and it may also hurt the bad agent. However, when bad news is inconclusive, we show that the good agent censors bad news more aggressively than the bad agent does. This improves the quality of public information and may benefit all players.
"Contracts that Reward Innovation: Delegated Experimentation with an Informed Principal"
We examine the nature of contracts that optimally reward innovations in a risky environment, when the innovator is privately informed about the quality of her innovation and must engage an agent to develop it. We model the innovator as a principal who has private but imperfect information about the quality of her project: the project might or might not be worth exploring, and even a high-quality project may fail. We characterize the best equilibrium for the high-type principal, which is either a separating equilibrium or a pooling one. Due to the interaction between the principal's signaling incentives and the agent's dynamic moral hazard, the best equilibrium induces inefficiently early termination of the high-quality project. The high-type principal is forced to share the surplus -- with the agent in the separating equilibrium, or with the low-type principal in the pooling equilibrium. A mediator, who offers a menu of contracts and keeps the agent uncertain about which contract will be implemented, can increase the payoff of the high-type principal to approximate her full-information surplus.
"Competition in Social Learning"
This paper studies how competition between platforms affects the process of social learning, and in particular how product differentiation shapes that process. Che and Hörner (2014) show that a monopolistic platform may want to over-recommend its product to early consumers in order to gather information for the benefit of future consumers. I show that when platforms do not differentiate their products, duopoly competition dramatically reduces early experimentation, and the Full Transparency policy is the unique equilibrium strategy for both platforms. When platforms differentiate their products, the equilibrium strategy lies between the Full Transparency policy and the optimal policy in the monopolistic case, depending on how differentiated the products are.