AI Browsers Are Here. But Should You Trust Them?
Everyone’s racing to launch their own browser. It’s like last month we had 3 or 4 main browsers in the market and today there are 10,000 and to be honest 9,995 of those are AI browsers. ChatGPT Atlas. Perplexity Comet. Gemini(powered) Chrome almost out I guess. They’re trying to become assistants, agents and interfaces you talk to and not just a browser.
They can summarize articles, manage tabs, draft replies, even carry out tasks across multiple sites. On paper, this sounds like a dream, the future of browsing. But in reality, the tech as of today is flawed. If you care about privacy, security, or even just having a basic level of control over your browser, you should probably stay away for a bit.
The Pitch
AI browsers aim to make the internet smarter and less tedious. Instead of copying content into ChatGPT, you can talk directly to your browser tab. Ask for a summary, clean up a draft email, compare products and even tell it to “find me flights and book a hotel” and it’ll try to follow through.
Atlas, for example, embeds ChatGPT in every tab. It can read the current page, access your open tabs, and even act on your logged-in sessions if you allow it. The assistant becomes part of your browsing environment able to help in context. In theory, this makes research, shopping, form-filling, and task automation easier. The browser turns into a workspace with built-in help.
In some ways, it works. In others, it’s dangerously premature.
The Problems (As of today)
Prompt Injection Is a Big Deal
AI browsers are vulnerable to a simple but dangerous trick: prompt injection. Malicious sites can embed hidden instructions in text, links, or even images. The browser’s AI reads these and interprets them as trusted commands.
Security researchers have already been talking about prompt injection at length. A web page can include invisible text that tells the browser to visit another site, download files, or even issue commands inside your email or cloud drive. One demo tricked Atlas into executing commands from a fake URL in the address bar. Others used hidden inputs to redirect users or steal login sessions.
Once the AI agent starts acting on your behalf with your credentials the usual browser security rules no longer apply, right?
Privacy? What Privacy?
By design, these browsers process everything you do. They read the pages you open, they can access the text you type into forms. Some keep a memory of your activity to personalize future responses.
Unless you explicitly opt out, your web behavior is constantly being parsed and stored. That includes things like which sites you visit, what you click, and what’s inside your emails or docs if the AI is active there. Companies say this is for your benefit, for smarter context, better answers but it’s also a privacy blunder. Especially if you don’t know what’s being stored, where, or for how long.
In one actual incident, a user was able to trick Comet into leaking email and calendar data stored in the browser’s memory all through a single prompt. No malware. No downloads. No nothing.
You’re No Longer in Control
Traditional browsers are built on a simple principle: you click, it acts. With AI browsers, a lot happens invisibly. The assistant might read a page, click a link, follow an action without any clear signal to the user.
There’s no activity log and even no audit trail that’s easily accessible to the user. Sometimes no confirmation prompt. That means if something goes wrong, you may not even realize it until it’s too late.
Worse, in early versions of Comet and Atlas, the assistant could pull data from logged-in sessions or perform cross-site actions without explicit approval. Even with updates and toggles, this is a serious gap in transparency and control.
The Tech Just Isn’t Ready
Even the creators admit it. OpenAI’s own security head called prompt injection an “unsolved frontier.” Perplexity has similar concerns. Security researchers from Brave, NeuralTrust, and LayerX have also flagged major issues.
Some IT teams are now advising companies to block AI browsers entirely. Especially for employees who handle customer data, financial systems, or sensitive IP. The features might be interesting for a couple minutes, but they’re not safe enough to trust.
What You Can Do (For Now)
If you’re still tempted to try an AI browser, use at your own risk, be smart. Don’t connect it to your primary accounts. Don’t let it access work tools or confidential data. Turn off memory and agent modes if they’re optional. Treat it like a prototype because that’s what it is.
Final Thought
AI browsers aren’t a bad idea. They’re just early. Too early. The save 2 minutes productivity pitch can be tempting but the cost in terms of privacy, security, and trust is too high today.
Until these tools are built with better safeguards, better transparency, and real protections against manipulation, they don’t belong in serious daily workflows.
Use at your own risk. Or better yet, wait it out.