The phone that does things you didn't ask it to do (on purpose)
Every year, Samsung launches a new Galaxy phone. Every year, the cameras get slightly better, the processor gets slightly faster, and the marketing deck includes a word that didn't appear last year. In 2024, it was "Galaxy AI." In 2025, it was "multimodal." In 2026, the word is "agentic." Unlike previous buzzwords, this one describes something genuinely different.

At Galaxy Unpacked on February 25 in San Francisco, Samsung showed a phone that doesn't just respond to your commands — it anticipates what you want and starts doing it before you ask [1]. If that sounds like the setup for a Black Mirror episode, you're not wrong. But if it works as demonstrated, it's also the most significant shift in how phones operate since the App Store.

The Galaxy S26 series — S26, S26+, and S26 Ultra — ships March 11. The hardware is iteratively excellent: a new Snapdragon processor, brighter displays, and improved cameras that would've been revolutionary five years ago and are table stakes today. What Samsung is actually betting the year on is the software running on top of it.
Three AIs walk into a phone
The most interesting architectural decision in the S26 is that it doesn't rely on a single AI system. Instead, Samsung built what it calls a multi-agent framework that coordinates between three different AI services: Bixby (Samsung's voice assistant, which has been mediocre for years), Google Gemini 3 (the latest version of Google's large language model), and Perplexity (the AI search engine that's been eating into Google's core business) [1]. Each agent has different strengths. Bixby handles on-device tasks — settings, phone calls, system controls. Gemini 3 handles complex reasoning, multi-step tasks, and the genuinely new "agentic" capabilities. Perplexity handles real-time web information when you need current facts rather than stored knowledge.
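Samsung hasn't published the framework's internals, but the basic dispatch pattern — classify a request, then hand it to the agent best suited for it — can be sketched. Everything below is hypothetical: the function names, the keyword heuristics, and the routing logic are illustrative stand-ins, not Samsung's actual implementation.

```python
from typing import Callable

# Hypothetical stand-ins for the three agents described above.
# A real system would call out to on-device and cloud models.
def bixby_agent(query: str) -> str:
    return f"[on-device] handling: {query}"

def gemini_agent(query: str) -> str:
    return f"[reasoning] planning multi-step task: {query}"

def perplexity_agent(query: str) -> str:
    return f"[web] fetching current info for: {query}"

# Keyword heuristics stand in for a real intent classifier.
ROUTES: list[tuple[tuple[str, ...], Callable[[str], str]]] = [
    (("brightness", "volume", "settings", "call"), bixby_agent),
    (("latest", "today", "news", "score"), perplexity_agent),
]

def route(query: str) -> str:
    q = query.lower()
    for keywords, agent in ROUTES:
        if any(k in q for k in keywords):
            return agent(query)
    # Complex or ambiguous requests fall through to the reasoning model.
    return gemini_agent(query)

print(route("Turn down the brightness"))
print(route("What's the latest score?"))
print(route("Plan my trip and email the itinerary"))
```

The design choice worth noting is the fallback: simple, recognizable intents go to the cheap local agent, time-sensitive lookups go to the web agent, and anything the router can't confidently classify defaults to the most capable reasoner — which is roughly the division of labor Samsung describes.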