Research

Publications


“A Dynamic Model of Censorship.” Theoretical Economics, 19(1), 29–60, January 2024.


We study the interaction between an agent of uncertain type, whose project gives rise to both good and bad news, and an evaluator who must decide if and when to fire the agent. The agent can hide bad news from the evaluator at some cost, and will do so if this secures her a significant increase in tenure. When bad news is conclusive, censorship hurts the evaluator, the good agent, and possibly the bad agent. However, when bad news is inconclusive, censorship may benefit all those players. This is because the good agent censors bad news more aggressively than the bad agent, which improves the quality of information.


Working Papers


“Setting Interim Deadlines to Persuade” (with Maxim Senkov)


In a continuous-time moral hazard problem, an agent starts shirking either once the multistage project is completed or once the project becomes unlikely to be completed before the final date. A principal wants the agent to exert effort for as long as possible and can design the flow of information about the project's progress to persuade the agent. If the project is sufficiently promising ex ante, the principal commits to providing only the good news that the project has been completed. If the project is not promising enough ex ante, the principal persuades the agent to exert effort by committing to provide not only good news but also the bad news that a project milestone has not been reached by an interim date. We show that it is optimal for the principal to promise immediate provision of the good news and to release the bad news at a deterministic date, an interim deadline. The model sheds light on supervisor-supervisee relationships in scientific research.


“Social Choice under Gradual Learning” (with Caroline Thomas and Takuro Yamashita)


This paper combines dynamic mechanism design with collective experimentation. Agents are heterogeneous: some stand to benefit from a proposed policy reform, while others are better off under the status quo. Each agent's private information about her preference type accrues only gradually over time. A principal seeks a mechanism that maximizes the agents' joint welfare while providing incentives for the agents to truthfully report their gradually acquired private information. The first-best policy may not be incentive compatible, as uninformed agents may have an incentive to vote for a policy prematurely rather than wait for their private signal. Under the second-best policy, the principal can incentivize truth-telling by setting a deadline for experimentation, delaying the implementation of the policy reform, and keeping agents in the dark about others' reports.


“Contracts that Reward Innovation: Delegated Experimentation with an Informed Principal”


We examine the nature of contracts that optimally reward innovation in a risky environment, when the innovator is privately informed about the quality of her innovation and must engage an agent to develop it. We model the innovator as a principal who has private but imperfect information about the quality of her project: the project may or may not be worth exploring, and even a high-quality project may fail. We characterize the best equilibrium for the high-type principal, which is either a separating equilibrium or a pooling one. Because of the interaction between the principal's signaling incentives and the agent's dynamic moral hazard, the best equilibrium induces inefficiently early termination of the high-quality project. The high-type principal is forced to share the surplus: with the agent in the separating equilibrium, or with the low-type principal in the pooling equilibrium. A mediator, who offers a menu of contracts and keeps the agent uncertain about which contract will be implemented, can raise the high-type principal's payoff to approximate her full-information surplus.


“Competition in Social Learning”


This paper studies how competition between platforms affects the process of social learning, and in particular how product differentiation shapes that process. Che and Hörner (2014) show that a monopolistic platform may over-recommend products to early consumers in order to gather information for the benefit of future consumers. I show that when platforms do not differentiate their products, duopoly competition dramatically reduces early experimentation, and the Full Transparency policy is the unique equilibrium strategy for both platforms. When platforms do differentiate their products, the equilibrium strategy lies between the Full Transparency policy and the optimal policy of the monopolistic case, and depends on the degree of product differentiation.