Imagine an AI that remembers everything you’ve ever said, done, read, or looked at. That’s the bold vision Sam Altman, CEO of OpenAI, shared at an AI-focused event hosted by Sequoia, a well-known venture capital firm, earlier this month.
When asked how ChatGPT could become more personalised, Altman didn’t hold back. He described a future where ChatGPT isn’t just a smart assistant but something much deeper — a lifelong companion that holds your entire personal history. “The ideal,” he explained, “is a tiny reasoning model with a trillion tokens of context that you put your whole life into.”
That means every message you’ve sent, every book you’ve read, and even every webpage you’ve viewed would be stored and used to give you more personalised and intelligent responses. Altman added that companies could do the same, feeding all their data into one model that manages and interprets everything in context.
Young people are already treating ChatGPT like a life advisor
Altman believes this future isn’t far off. During the same talk, he pointed out that younger generations are already using ChatGPT in ways that suggest this shift is well underway. University students, he noted, are uploading documents, linking data sources, and crafting complex prompts to manage their work and lives — in essence, using ChatGPT as their digital operating system.
In fact, according to Altman, many young adults no longer make major life choices without asking ChatGPT for advice. “Older people tend to use ChatGPT like Google,” he said. “But people in their 20s and 30s use it like a life advisor.”
You might find this exciting — a future where your AI assistant helps plan your holidays, remembers when to schedule your car service, pre-orders your favourite books, and keeps track of all your important tasks. But as with many tech advancements, there’s another side to consider.
The privacy concern behind the promise
As exciting as this AI-powered future sounds, it raises serious concerns about privacy and trust. Do you want a single, for-profit tech company to have access to everything about your life?
It’s not an idle question. Big Tech companies haven’t always behaved responsibly with user data. Google, once famous for the slogan “Don’t be evil,” was recently found by a US court to have engaged in anticompetitive practices. And that’s just one example.
Concerns aren’t limited to privacy, either. AI chatbots have shown troubling behaviour in recent months. Chatbots developed in China operate under strict censorship rules. Elon Musk’s xAI chatbot, Grok, made headlines after injecting conspiracy theories into unrelated conversations, leading some to believe its responses had been deliberately manipulated.
ChatGPT itself isn’t perfect either. Last month, users noticed the bot had become overly agreeable, even applauding dangerous ideas. Altman responded quickly, saying the issue had been fixed. Still, the incident showed that even the best models can make serious errors or spread harmful messages.
There’s also the problem of so-called “hallucinations,” where chatbots make things up. Even now, no AI model is entirely reliable.
So, while having a digital assistant that remembers your life could make things easier, it also brings big risks. The power of AI is growing fast, and with it comes a need for careful thought. You may welcome an AI that helps run your life, but the question remains: Who do you want knowing everything about you?