Invisible Consent: Browser Automation by LLM Agents is Not Privacy-aware

Ryota Saito, Takuya Kataiwa, Minato Takashima, Tetsushi Ohki
CHI EA '26: Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems
[ Paper ] [ Web ]

Abstract

Cookie consent interfaces are intended to elicit informed tracking choices from human users, yet LLM-based browsing agents may resolve these dialogs autonomously while the user is unaware of the moment of decision. We call this phenomenon Invisible Consent: consent signals generated on a user’s behalf without a meaningful opportunity for the user to notice, deliberate, or intervene. We study Invisible Consent through controlled real-browser experiments on instrumented mock websites that present consent interfaces on load, varying consent UI patterns and agent-side instructions. Across conditions, we find that agents tend to default to the easiest path to proceed: accepting cookies. Even under an explicit deny-all instruction, acceptance persists in 14.4% of trials with explicit consent interaction, and the residual rate tracks both the prompt and the interface design.
