Invisible Consent: Browser Automation by LLM Agents Is Not Privacy-Aware
Abstract
Cookie consent interfaces are intended to elicit informed tracking choices from human users, yet LLM-based browsing agents may resolve these dialogs autonomously, leaving the user unaware that a decision was ever made. We call this phenomenon Invisible Consent: consent signals generated on a user’s behalf without a meaningful opportunity for the user to notice, deliberate, or intervene. We study Invisible Consent through controlled real-browser experiments on instrumented mock websites that present consent interfaces on page load, varying both the consent UI pattern and the agent-side instructions. Across conditions, we find that agents default to the easiest path to proceeding: accepting cookies. Even under an explicit deny-all instruction, acceptance persists in 14.4% of trials involving explicit consent interaction, and the residual acceptance rate varies with both the prompt wording and the interface design.