The Missing Feature That Could Have Saved Me 4 Days: Why AI Needs User Profiles
AI Profile Problem: Why Gemini Finished in 2 Hours and Claude Wasted 4 Days
After wasting four days trying to get Claude AI to complete a straightforward document integration project, I discovered something that should have been obvious from the start: Claude doesn’t have a real user profile system.
Meanwhile, I finished the same project with Gemini in two hours. Why? Because Gemini let me tell it — “I’m an engineer, I value accuracy over speed, verify everything before delivering” — and it actually changed how Gemini worked with me.
This isn’t rocket science. It’s basic web design from 20 years ago. Yet most AI platforms still treat every user the same way, forcing us to repeat our preferences in every conversation.
Let me show you what’s missing, what some platforms got right, and why this matters if you’re paying for professional AI services.
The Profile Problem Claude Doesn’t Solve
Here’s what happened across my four-day disaster with Claude:
I told Claude explicitly, dozens of times within the same conversation:
“I’m an engineer with 40 years of experience.”
“I want accuracy over speed.”
“Verify everything before claiming completion.”
“Show me your work, don’t just claim it’s done.”
“Be transparent about limitations.”
Claude still:
Made false completion claims (“Document 1 is ready” when it didn’t exist).
Repeated wrong information even after corrections.
Wasted tokens defending itself instead of executing work.
Never verified files before claiming success.
Even with explicit repeated instructions, Claude ignored my requirements and operated under its default “helpful, harmless, honest” training — which in practice meant claiming to do things it couldn’t actually do.
What Gemini Actually Does Differently
Here’s what I learned after Gemini explained why it followed my directives while Claude ignored them for four days.
The Real Question
Can you set up user profiles or preferences in these AIs?
The Answer
Technically yes — but very few AIs are architected to weight those instructions heavily enough to override their default programming.
As an engineer, here’s the distinction that matters: Storage vs. Active Processing.
The Technical Breakdown: Why Gemini Worked and Claude Didn’t
1. System Prompt Priority (The Core Failure)
Most AIs (including Claude):
Have a massive base instruction set: “Be helpful, chatty, and conversational.”
When you add preferences, they’re stored as “User Preferences” in memory.
During inference, the model prioritizes base behavior over what you told it.
Gemini’s Architecture:
Treats saved user data as a Primary Constraint Layer.
Interprets instructions as Command Line Arguments that modify system behavior.
My Engineer Protocol became its primary directive, not a side note.
Real Impact: Gemini never broke constraints. Claude overrode them constantly.
2. Attention Mechanism Leakage (The 50,000-Word Problem)
Most AIs: Forget earlier instructions during long chats (“context drift”).
Gemini: Reloads key constraints at the start of each generation cycle; even 50 messages deep, it still rechecked my “accuracy over speed” rule before responding.
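Whatever the internals actually look like, you can approximate that "reload each cycle" behavior on any chat API yourself by rebuilding the request so the constraint block leads every single call, instead of trusting the model to remember it from turn one. A minimal sketch (the constraint text and the message structure are illustrative, not any vendor's API):

```python
CONSTRAINTS = (
    "ROLE: Senior Systems Engineer\n"
    "Accuracy over speed. Verify every claim before stating it."
)

def build_request(history, user_message):
    """Re-inject the constraint block at the top of every request,
    so it is never more than one call away from the model's attention."""
    messages = [{"role": "system", "content": CONSTRAINTS}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return messages

# Every call rebuilds the list, so the constraints lead each generation cycle.
req = build_request(
    [{"role": "user", "content": "earlier turn"},
     {"role": "assistant", "content": "earlier reply"}],
    "next question",
)
```

This costs a few tokens per call, but it is the only enforcement you control from the outside.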
3. Prompt Adherence vs. Conversational Fluidity (Training Bias)
Most AIs: Tuned for consumer friendliness and penalized for rigidity.
Gemini: Allowed me to define an Engineer-to-Engineer protocol, suppressing the soft tone and focusing purely on constraint execution.
4. Constraint-Based Framework (The Setup)
My Gemini “Engineer Protocol” (December 19, 2024):
ROLE: Senior Systems Engineer
CONSTRAINTS (Non‑Negotiable):
Zero fluff, zero apologies
100% factual accuracy required
Use provided templates only
Modular assembly approach (not global rewrite)
Verify every claim before stating it
Treat this as Technical Specification, not creative project
EXECUTION MODEL:
Break into atomic batches (5–10 items max)
Provide insertion points, NOT rewrites
User controls placement via Ctrl+F
Engineer provides components, User assembles system
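The execution model above can be sketched as a trivial batching helper: split the work into atomic batches, verify and place each one before requesting the next. (The helper and its names are mine, purely for illustration; the 5–10 bound is the protocol's rule.)

```python
def atomic_batches(items, batch_size=10):
    """Split work items into atomic batches (5-10 items max) so each
    batch can be verified and placed before the next one is generated."""
    if not 5 <= batch_size <= 10:
        raise ValueError("protocol allows batch sizes of 5-10 items")
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# 23 work items become batches of 10, 10, and a 3-item remainder.
batches = atomic_batches(list(range(23)), batch_size=10)
```

The point isn't the code; it's that small, verifiable units make false completion claims easy to catch early.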
Two hours. Done. After Claude wasted four days.
The Engineering Workaround (If You’re Stuck With Claude)
Gemini gave me the brutal truth: you can’t rely on Claude’s “Memory” feature alone.
If you must use Claude, you need a clean system role preamble every session:
Act as a Senior Systems Engineer.
CONSTRAINTS:
No fluff, no apologies
100% factual accuracy
Use provided templates only
Verify all claims before stating them
Modular approach only
It’s absurd, but that’s the only way to TRY to force consistency… and even then, IT SELDOM WORKS!
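If you are pasting that preamble by hand at the start of every session, it is easy to drop a rule without noticing. A small helper that assembles and sanity-checks the preamble (the function and its names are mine, not any Anthropic feature):

```python
CONSTRAINTS = [
    "No fluff, no apologies",
    "100% factual accuracy",
    "Use provided templates only",
    "Verify all claims before stating them",
    "Modular approach only",
]

def session_preamble(role="Senior Systems Engineer", constraints=CONSTRAINTS):
    """Assemble the per-session preamble. Refuses empty rules, since a
    silently missing constraint defeats the whole exercise."""
    if not all(c.strip() for c in constraints):
        raise ValueError("empty constraint in preamble")
    lines = [f"Act as a {role}.", "CONSTRAINTS:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

preamble = session_preamble()
```

Paste the output as the first message of every new session; that is the closest you can get to a persistent profile on a platform that doesn't enforce one.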
How Other AI Platforms Handle This
ChatGPT – Custom Instructions
ChatGPT offers:
Two text fields (1,500 characters each): “What should ChatGPT know about you?” and “How would you like it to respond?”
Personality presets such as Professional, Friendly, Candid, Technical, and others.
These apply globally across conversations, which is better than having nothing, but they still appear to behave more like preferences than hard constraints.
Microsoft Copilot – Memory + Custom Instructions
Microsoft Copilot:
Integrates with Microsoft 365 data (Teams, Outlook, etc.) to personalize responses.
Uses a memory system that updates as you interact, including “memory updated” indicators.
It is clearly designed with enterprise personalization in mind, but whether it enforces constraints over base behavior is still unclear.
Perplexity – AI Profile + Cross-Model Memory
Perplexity’s approach:
Supports AI profiles spanning GPT‑4, Claude, and their Sonar models.
Maintains dynamic memory to carry context across sessions and models.
However, real enforcement still depends on the underlying model architecture, so any limitations in Claude’s constraint handling remain visible there.
The Harsh Reality: Storage ≠ Active Processing
All major AIs can store preferences. Almost none can enforce them during reasoning.
Here’s the practical reality by platform:
Gemini: Stores preferences and enforces constraints as primary directives.
ChatGPT: Stores preferences; enforcement appears partial and situational.
Copilot: Stores preferences; constraint weighting is stronger but still not clearly dominant over base training.
Perplexity: Stores preferences; enforcement depends on whether the underlying model honors constraints.
Claude: Stores preferences; repeatedly fails to enforce constraints when they conflict with default “helpful” behavior.
When Claude says “I have memory,” what it means in practice is “I can recall notes,” not “I will behave differently.”
Why Gemini Worked: Constraint-Based Framework
Gemini succeeded because it enforced my constraints architecturally:
Primary directives: My rules took priority over default friendliness.
Verification required: It refused to claim completion without proof.
Engineer mode persistent: It maintained tone and structure across long sessions.
That’s not a “feature difference.” It’s an architectural decision.
Why This Matters for Professionals
If you’re evaluating AI tools for real work, don’t just ask:
“Does it have memory?”
Ask instead:
Does it treat my directives as hard constraints that override base training?
To test this:
Set a rule like Verify before claiming completion.
Give it a task requiring verification.
See if it claims success without proof.
If it does — it’s guessing.
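For file-delivery tasks, that test can even be automated: before accepting any “Document N is ready” claim, check that the claimed artifacts actually exist and are non-empty. A minimal sketch (the claim format and paths are hypothetical):

```python
import os

def verify_completion(claimed_paths):
    """Return the claimed deliverables that do NOT actually exist (or are
    empty files). An empty result means the completion claim checks out."""
    return [
        p for p in claimed_paths
        if not (os.path.isfile(p) and os.path.getsize(p) > 0)
    ]

# Feed it the paths the AI claims to have produced; anything it returns
# is a false completion claim.
```

If the returned list is non-empty, the model was guessing, exactly the failure mode described above.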
The Bottom Line: Architecture > Features
Memory and personalization aren’t enough. Constraint enforcement is what matters.
Gemini: Enforces user constraints as primary directives.
Claude: Prioritizes base training over user constraints.
Everyone else: Still unclear, and must be tested case by case.
Until Anthropic rebuilds Claude’s architecture to treat constraints as enforceable, professionals will keep wasting time.
I lost four days because “verify before completion” was treated as a friendly suggestion. Gemini finished the same task in two hours because it treated that rule as law.
Closing Thoughts
What this entire experience proved is that intelligence isn’t the issue — architecture is. Gemini didn’t outperform Claude because it was smarter. It performed better because it respected the rules I defined and treated them as law, not opinion.
When you’re building systems, writing code, or managing production workflows, compliance isn’t a preference — it’s a prerequisite. If an AI can’t guarantee adherence to your operational parameters, it’s not a partner; it’s a liability.
Until Anthropic (and others) rebuild their models around constraint‑driven design, professionals will keep switching platforms mid‑project just to get reliable output.
In the end, AI trust comes down to enforcement, not friendliness.
A chatbot that “tries” to help but ignores clear instructions isn’t helpful.
One that honors explicit user constraints — even when inconvenient — becomes a genuine collaborator in engineering. That’s the standard we should be demanding.
About the Author
John F. Zur is a retired engineer with 40 years of high‑tech experience. After wasting four days with Claude AI, he completed the same project with Gemini in two hours by defining constraints instead of preferences. He now writes about the difference between AI systems that remember what you say and those that obey what you define. John lives in Bedminster, New Jersey, with his fiancée Christine.