RESEARCH

Working Papers

Do past privacy choices affect consumers' current privacy choices? 

Job Market Paper


Franchised or Corporate-owned: Brand Resilience in Bad Times.  

with A. Goldfarb and C. Tucker


with K. Rajagopalan and T. Zaman

Publications

with A. Goldfarb

Abstract. Privacy has received increasing attention in the media and in regulatory discussions, a consequence of the increased usefulness of digital data. The literature has emphasized the benefits and costs of digital data flows to consumers and firms. The benefits arise in the form of data-driven innovation, higher-quality products and services that match consumer needs, and increased profits. The costs relate to the intrinsic and instrumental values of privacy. Under standard economic assumptions, this framing of a cost-benefit trade-off might suggest little role for regulation beyond ensuring that consumers are appropriately informed in a robust competitive environment. The empirical literature thus far has focused on this direct cost-benefit assessment, examining how privacy regulations have affected various market outcomes. However, a growing body of theoretical work emphasizes externalities related to data flows. These externalities, both positive and negative, suggest benefits to the targeted regulation of digital privacy.



[31st Conference on Neural Information Processing Systems (NIPS 2017)]

with Q. Liang and E. Modiano

Abstract. The Constrained Markov Decision Process (CMDP) is a natural framework for reinforcement learning tasks with safety constraints, in which agents learn a policy that maximizes the long-term reward while satisfying constraints on the long-term cost. A canonical approach to solving CMDPs is the primal-dual method, which updates parameters in the primal and dual spaces in turn. Existing methods for CMDPs use only on-policy data for dual updates, which results in sample inefficiency and slow convergence. In this paper, we propose a policy search method for CMDPs called Accelerated Primal-Dual Optimization (APDO), which incorporates an off-policy trained dual variable in the dual update procedure while updating the policy in the primal space with an on-policy likelihood ratio gradient. Experimental results on a simulated robot locomotion task show that APDO achieves better sample efficiency and faster convergence than state-of-the-art approaches for CMDPs.
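
For intuition, the sketch below is a minimal Python illustration of a generic on-policy primal-dual loop for a CMDP, not the APDO method itself (APDO additionally trains an off-policy estimate of the dual variable). The environment interface, the tabular softmax policy, and all hyperparameters are illustrative assumptions rather than details from the paper.

import numpy as np

# Minimal primal-dual sketch for a constrained MDP (illustrative only; not the
# paper's APDO code). Assumes a hypothetical environment `env` with reset()/step()
# over small discrete state and action spaces, where step() also returns a safety cost.

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def run_episode(env, theta, horizon=200):
    """Roll out one episode; return a trajectory of (state, action, reward, cost)."""
    s = env.reset()
    traj = []
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = np.random.choice(len(probs), p=probs)
        s_next, r, c, done = env.step(a)  # assumed interface: reward r and safety cost c
        traj.append((s, a, r, c))
        s = s_next
        if done:
            break
    return traj

def primal_dual_step(env, theta, lam, alpha=1e-2, beta=1e-2, cost_limit=1.0):
    """One primal-dual iteration: a likelihood-ratio (REINFORCE-style) gradient step
    on the Lagrangian return in the primal space, then a projected gradient step
    on the multiplier in the dual space."""
    traj = run_episode(env, theta)
    total_r = sum(t[2] for t in traj)
    total_c = sum(t[3] for t in traj)
    lagrangian_return = total_r - lam * total_c

    # Policy-gradient (primal) update: grad log pi(a|s) = onehot(a) - softmax(theta[s]).
    grad = np.zeros_like(theta)
    for (s, a, _, _) in traj:
        probs = softmax(theta[s])
        grad[s] -= probs * lagrangian_return
        grad[s, a] += lagrangian_return
    theta = theta + alpha * grad

    # Dual update: raise lambda when the constraint is violated, and keep it nonnegative.
    lam = max(0.0, lam + beta * (total_c - cost_limit))
    return theta, lam

The dual step here uses only the on-policy cost estimate total_c, which is exactly the slow-converging pattern the abstract describes; APDO replaces that estimate with an off-policy trained dual variable.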