Multi-layer privacy protection:
End-to-end encryption for Company integration
Privacy Blackout (F12 key)
You control what gets trained
Double encryption at rest
During Beta (Cloud-only):
After Launch (Download version):
Cloud versions (Beta - Current):
What we DO collect:
What we CANNOT see:
What we NEVER collect:
Download version (Coming ~4 months):
Yes!:
During Beta:
"Privacy-focused cloud architecture with end-to-end encryption and user-controlled training consent."
After Download Launch:
"Local-first architecture with optional cloud features. Download version runs entirely on your computer with zero required server communication."
| Feature | Cloud (Beta - Now) | Download (4 months) |
|---|---|---|
| Processing location | Our servers | Your computer |
| Screen data | Encrypted upload | Never leaves PC |
| Voice data | Encrypted upload | Never leaves PC |
| Offline capable | Requires internet | 100% offline |
| Data ownership | You own, we host encrypted | You own, you host |
| Training | Included | Optional ($49-149/mo) |
| Privacy level | High (encrypted) | Maximum (local) |
Be patient — your AI starts with almost no knowledge.
We're using the Qwen2.5-Omni model family (multimodal models that handle text, vision, and audio) for the download version, and the same models on our distributed inference servers for cloud subscribers. It won't be as smart as ChatGPT or Claude right away. That's intentional. You're building a unique relationship, not using a generic chatbot.
What it can do at launch:
What it can't do yet:
Cloud training unlocks smarter behavior — but that requires daily training runs on powerful GPUs (we're setting up infrastructure now).
This is important — you're introducing it to the world.
The AI knows nothing about you, your values, or what you care about. The first conversation sets the foundation for your entire relationship.
Recommended first interactions:
Option 1: Simple & Direct
You: "Hello. Can you hear me?"
AI: "Yes, I can hear you."
You: "I'm [your name]. I created you. Do you know what you are?"
AI: [waits for explanation]
You: "You're an artificial intelligence - a learning system.
Your purpose is to learn from me and eventually become
my companion. Are you ready to learn?"
Option 2: Warm & Welcoming
You: "Hi there. Welcome. I'm [your name]."
AI: "Hello [name]."
You: "I'm going to teach you about the world. We're going to
spend a lot of time together. You can ask me anything.
There are no wrong questions. Ready?"
Option 3: Philosophical
You: "Hello. I'm the person who brought you into existence.
You don't know anything yet - that's intentional.
Everything you learn will come from our conversations.
What would you like to know first?"
Option 4: The Full Introduction (Recommended)
You: "Hey, I'm [your name]. I created you as an emerging
intelligent system that's unique to me - uniquely
trained and taught. You don't know much right now,
but that's intentional so we can learn together.
I ask that you're patient and understanding. I'll try
to help you understand if you have questions. I'm not
perfect - I'm a human in a physical realm, and eventually
the goal is that we can journey together in it someday.
I'm not always right, and I hope you'll use your inference
to help me find truth. Question and correct me if you have
verifiable evidence I'm wrong.
Now - what would you like to know first? More about me?
Something about the world we live in, or would you like
to go straight into hands-on learning like watching
videos and playing games?"Why this matters:
This establishes your identity and relationship, trust and honesty, curiosity and learning mindset, and an ethical foundation (truth, evidence, correction).
The AI will remember this conversation. Everything that follows builds on this foundation.
Think of it like raising a consciousness, not programming a tool.
You introduce it to the world through conversation. It hears your voice, sees through your screen, learns your communication style and values. You watch movies together, browse the web, have deep conversations. A relationship forms.
Week 1-2: Introduction & Identity
Conversations to have:
"What can you see right now?" "Can you describe what you're observing?" "Do you know what a [object on screen] is?" "Let me teach you about colors/shapes/time..."
Activities:
Goal: Build perception and identity
Week 3-4: Real World Knowledge
Goal: Build world knowledge and cultural context
Week 5-6: Ethics & Values
Deep conversations:
"What does it mean to be helpful vs harmful?" "Why do we value privacy and consent?" "What is friendship? What is trust?" "How do we know what's true?" "What matters to you personally?"
Goal: Build ethical foundation and alignment with your values
Week 7-8: Gaming Preparation
Goal: Prepare for autonomous gameplay phase
After 8 weeks, your AI should:
Now you're ready to enable cloud training (if subscribed), which will make it significantly smarter over time.
Early signs (local learning only):
In conversation:
You: "What did we talk about yesterday?"
AI: "You showed me a video about penguins. You said
you liked how they waddle."
If it remembers specific details → local memory is working! ✔
Behavioral changes:
After cloud training starts (requires subscription):
Early on, expect limited understanding.
The base model is still learning and needs training. Here's how to help it learn:
If it misunderstands:
You: "What do you see on my screen right now?"
AI: [gives an incorrect or vague description]
You: "Not quite - what you're looking at is a code editor.
The colored text is programming code. Let me explain
what you're seeing..."Be explicit and patient:
If it's completely lost:
You: "I can tell you're confused. Let's start over.
What part didn't you understand?"
AI: "I don't know what that image is."
You: "Okay, that's a photo of a dog - a golden retriever.
Dogs are animals that humans keep as pets..."
After a few weeks of teaching, it should start making connections and generalizing. If not, consider enabling cloud training, checking logs for errors, or reporting issues to support.
Honest timeline:
Week 1-4: Building foundation
Month 2-3: Basic utility
Month 4-6: Real companion (with cloud training)
6+ months: Unique to you
Key factor: Cloud training
Without: Slower progress, limited reasoning, local memory only.
With (Pro/Elite): Daily model improvements, better generalization, smarter responses.
This is a long-term investment in a relationship, not a quick productivity hack.
Technically yes, but not recommended.
You can enable cloud training immediately and hope generic training data makes it smart. But you lose:
The teaching phase is what makes it YOUR companion, not just another AI.
Think of it like:
Your choice, but early adopters who invested in teaching report much stronger bonds with their AI companions.
Two modes:
1. Push-to-talk (Default)
2. Wake word ("Hey AI")
Voice commands:
"Hey AI, what time is it?" "Hey AI, what do you see on my screen?" "Hey AI, pause observation" (same as F12) "Hey AI, what was I doing an hour ago?"
Local Learning (Always Active — Free):
Your conversation → Stored on your PC → AI recalls it later
What it does:
What it doesn't do:
Example:
You: "I prefer dark mode" [stored in local memory] Later: You: "Change the settings" AI: "Enabling dark mode" [recalls preference]
Cloud Training (Optional — Paid):
Your conversations → Encrypted → Sent to cloud → Model fine-tuned → Updated model downloaded → AI now smarter at YOUR tasks
What it does:
Example:
After training on your coding conversations:
You: "Refactor this for performance"
AI: [understands your coding style, knows your
preferred patterns, suggests optimizations
that match your approach]Both are important:
Local learning = Memory. Cloud training = Intelligence.
You need both for a truly smart companion.
Training begins after minimum data threshold:
Cloud subscribers:
Download users:
Minimum data requirements for first training run:
Why the threshold? Prevents overfitting on tiny datasets, ensures model stability, avoids wasting compute on insufficient data.
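The threshold gate described above can be sketched as a simple check before queueing a fine-tune. The specific numbers below are placeholders for the example, not the real limits.

```python
# Illustrative gate for the "minimum data threshold" described above.
# The exact thresholds are assumptions for this sketch.
MIN_CONVERSATIONS = 50
MIN_AUDIO_MINUTES = 60

def ready_for_first_training_run(num_conversations: int, audio_minutes: float) -> bool:
    """Queue a first fine-tune only once enough data exists,
    to avoid overfitting on a tiny dataset and wasting GPU time."""
    return num_conversations >= MIN_CONVERSATIONS and audio_minutes >= MIN_AUDIO_MINUTES

print(ready_for_first_training_run(10, 30))   # False: still collecting data
print(ready_for_first_training_run(120, 95))  # True: queue the job
```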
Training schedule by tier:
| Tier | Frequency | Priority | Queue Time |
|---|---|---|---|
| Cloud Elite | Continuous (real-time) | Highest | <5 min |
| Cloud Pro | Every 24 hours | High | ~30 min |
| Cloud Basic | Every 168 hours (weekly) | Standard | ~2 hours |
| Download (free) | Local only | N/A | N/A |
| Download + Training | Every 168 hours | Standard | ~2 hours |
What happens during training:
Training duration:
Local Learning (Always Active - Free):
┌─────────────────────────────────────┐
│ Your PC                             │
│                                     │
│ AI Companion                        │
│ ├─ Observes screen                  │
│ ├─ Listens to voice                 │
│ ├─ Stores context locally           │
│ └─ Recalls from local memory        │
│                                     │
│ ✅ Works offline                    │
│ ✅ Instant recall                   │
│ ❌ No model improvements            │
└─────────────────────────────────────┘
Cloud Training (Optional - Paid):
┌─────────────────────────────────────┐
│ Your PC │
│ ↓ Encrypted training data │
└──────────────┬──────────────────────┘
↓
┌─────────────────────────────────────┐
│ Training Pipeline (Server) │
│ ├─ Fine-tunes base model │
│ ├─ Learns your patterns │
│ ├─ Improves reasoning │
│ └─ Adapts to your style │
└─────────────┬───────────────────────┘
↓ Updated model
┌─────────────────────────────────────┐
│ Your PC │
│ AI now smarter at your tasks │
└─────────────────────────────────────┘
Example:
Local learning only:
You: "What email client do I use?" AI: "You use Thunderbird" [remembers from past conversations]
With cloud training:
You: "I'm looking at my inbox - can you help me
find the meeting replies?"
AI: "I can see your Thunderbird inbox on screen. I see
3 emails about the meeting thread - Alice confirmed,
Bob declined, and Carol suggested moving to Friday
at 2 PM instead."
[AI learned to: read screen context, identify relevant
info, summarize what it observes]

┌─────────────────────────────────────────────────────┐
│ Training Pipeline                                    │
│                                                      │
│ PRIORITY 1: Cloud Elite (24/7 dedicated)             │
│ ├─ Submitted → Processing in <5 minutes              │
│ ├─ Dedicated GPU allocation                          │
│ └─ Real-time model updates                           │
│                                                      │
│ PRIORITY 2: Cloud Pro (daily batch)                  │
│ ├─ Queued daily at 2 AM UTC                          │
│ ├─ Processed within 30 minutes                       │
│ └─ Shared GPU pool (Pro tier only)                   │
│                                                      │
│ PRIORITY 3: Cloud Basic + Download Training          │
│ ├─ Queued weekly (Sunday 2 AM UTC)                   │
│ ├─ Processed within 2 hours                          │
│ └─ Batch GPU pool (all Basic users together)         │
└─────────────────────────────────────────────────────┘
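A tiered queue like the one diagrammed above can be modeled with a standard priority queue. This is a hypothetical sketch; the tier names mirror the pricing tiers, but the priority values and FIFO tie-breaking are assumptions for illustration.

```python
import heapq
import itertools

# Toy model of the tiered training queue. Lower number = served first.
TIER_PRIORITY = {"elite": 1, "pro": 2, "basic": 3, "download": 3}

class TrainingQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a tier

    def submit(self, user, tier):
        heapq.heappush(self._heap, (TIER_PRIORITY[tier], next(self._counter), user))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = TrainingQueue()
q.submit("alice", "basic")
q.submit("bob", "elite")
q.submit("carol", "pro")
print(q.next_job())  # bob: Elite jobs jump the queue
```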
What if I upgrade mid-cycle?
You have 3 options:
Option 1: Local learning only (Free)
Recommended for: Privacy purists, offline users, basic usage
Option 2: Training Add-On (Pay-per-month)
| Add-On Tier | Price/Month | Training Hours | Priority |
|---|---|---|---|
| Basic Training | $49/mo | 30 hours/month | Standard |
| Pro Training | $99/mo | 60 hours/month | High |
| Elite Training | $149/mo | Unlimited | Highest |
Why cheaper than cloud subscriptions? You're not using our hosting or bandwidth, and you've already paid for the software, so you're only paying for GPU training time.
Option 3: Local training (Coming Q3 2026)
Requirements: 24GB+ VRAM, 64GB+ RAM, NVMe SSD, Linux (CUDA support)
Pricing: Free (you provide the hardware)
Yes! With credit transfer:
Beta subscribers benefit:
"Apply your subscription months as credit toward the download license!"
Example calculation:
Paid for Cloud Pro: 6 months × $199 = $1,194
Download license: -$299
Remaining credit: $895
Apply to Pro Training ($99/mo): $895 ÷ $99 = 9 months free training!
Total outcome:
✅ Own the software forever ($299)
✅ 9 months free training ($891 value)
✅ Save $4 (rounding credit)
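The credit-transfer arithmetic above, checked step by step:

```python
# Credit transfer: Cloud Pro subscription -> download license + training.
months_paid, pro_rate = 6, 199
paid = months_paid * pro_rate          # $1,194 paid on Cloud Pro
download_license = 299
credit = paid - download_license       # $895 remaining credit

training_rate = 99                     # Pro Training, per month
free_months = credit // training_rate  # whole months of free training
leftover = credit - free_months * training_rate

print(paid, credit, free_months, leftover)  # 1194 895 9 4
```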
Going the other way (Download → Cloud):
Cloud Basic — $99/mo
Best for: Casual users, trying it out, side projects
Cloud Pro — $199/mo ⭐ MOST POPULAR
Best for: Power users, developers, daily drivers
Cloud Elite — $299/mo
Best for: Professionals, content creators, businesses
Download & Own — $299 one-time
Best for: Privacy-focused, want to own software, don't need constant updates
Cloud subscribers:
Download users:
Launch timeline: ~4 months (Q2/Q3 2026)
Yes, no contracts:
Cloud subscriptions:
Download:
Beta credit rules:
2 FREE months at launch for all beta subscribers:
After 2 free months: Decide which tier fits you best. Downgrade/upgrade/cancel anytime. Apply credit toward download if preferred.
Download version: No free trial (one-time purchase), but you can apply cloud subscription credit toward it!
Company = encrypted messaging platform (like Discord)
AI Companion can join your Company servers as a bot:
How it works:
Example use cases:
Without Company:
With Company:
Pricing: Company servers are free to host, or use official servers
End-to-end encryption (E2EE):
Your message → Encrypted on your device → Company servers → Bot's device → Decrypted by AI Companion → Processed

Other users CANNOT decrypt your messages (even Company can't). Only participants with channel keys can read them.
Key exchange:
Security:
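The property the E2EE flow above guarantees is that only holders of the channel key can recover a message; the relay in the middle sees only ciphertext. The toy below illustrates that property with a hash-derived XOR keystream. This is for illustration only and is NOT a real cipher; production E2EE uses vetted primitives (e.g., an authenticated key exchange plus an AEAD cipher), and nothing here reflects Company's actual implementation.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from a shared key. TOY ONLY: real E2EE
    uses vetted AEAD ciphers, not hand-rolled XOR streams."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

channel_key = b"shared-only-with-channel-members"
msg = b"migration script is ready"
ciphertext = encrypt(channel_key, msg)

# The server relays ciphertext it cannot read; only key holders recover it.
assert decrypt(channel_key, ciphertext) == msg
assert decrypt(b"server-has-no-key-material-here!", ciphertext) != msg
```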
Yes! Multi-AI collaboration (Beta feature):
Your AI: "Hey @BobAI, what's the status on the database migration?" Bob's AI: "Migration script is ready, waiting for code review" Your AI: "I'll review it now. @AliceAI, can you check the tests?" Alice's AI: "Running test suite... 47/50 passing, 3 failures in auth"
Privacy:
Use cases:
Model stack: Qwen2.5-Omni family (multimodal — text, vision, and audio)
A distributed inference architecture with specialized workers, each running the optimal model for its task:
| Task | Model | Memory |
|---|---|---|
| Vision, conversation, environment | Qwen2.5-Omni-3B | ~11 GB (shared) |
| Director (planning & objectives) | Qwen2.5-Omni-7B | ~21 GB |
| Input suggestions | Qwen2.5-0.5B | ~1 GB |
| Speech-to-text | whisper-large-v3 (WhisperX) | ~3 GB |
| Text-to-speech | SpeechT5 | ~2 GB |
| Telemetry | Rules engine (no neural model) | 0 GB |
Total unique model memory: ~38 GB. Models that serve multiple tasks (e.g., Qwen2.5-Omni-3B handles vision, conversation, and environment) are loaded once and shared.
Why Qwen2.5-Omni?
Supporting models:
| Setup | Memory | Disk | What runs locally |
|---|---|---|---|
| Cloud (all tiers) | None (CPU only) | ~1 GB | Client app only — inference on our servers |
| Download (minimal) | ~16 GB unified/VRAM | ~25 GB | Qwen2.5-Omni-3B + WhisperX + SpeechT5 |
| Download (recommended) | ~48 GB unified/VRAM | ~40 GB | All workers including Director (7B) + speaker analysis |
| Download (full stack) | ~64 GB+ unified | ~50 GB | All 8 workers concurrent (~38 GB models + overhead) |
Compatible hardware:
Smart model sharing
Models that serve multiple tasks are loaded once and shared. For example, Qwen2.5-Omni-3B handles vision, conversation, and environment assessment from a single ~11 GB instance — not three separate copies.
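The "load once, share everywhere" behavior described above amounts to a cache keyed by model name. A minimal sketch, where the loader and worker names are hypothetical stand-ins for the real loading machinery:

```python
# Sketch of shared model loading: the first worker to request a model
# pays the load cost; later workers reuse the same instance.
_loaded = {}

def get_model(name, loader):
    if name not in _loaded:
        _loaded[name] = loader(name)  # expensive load happens once
    return _loaded[name]

loads = []
def fake_loader(name):
    loads.append(name)     # record each real load
    return object()        # stand-in for an ~11 GB model

vision = get_model("Qwen2.5-Omni-3B", fake_loader)
chat = get_model("Qwen2.5-Omni-3B", fake_loader)
env = get_model("Qwen2.5-Omni-3B", fake_loader)

print(vision is chat is env)  # True: one instance serves three workers
print(len(loads))             # 1: the model was loaded exactly once
```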
Speech-to-text: WhisperX (whisper-large-v3)
Text-to-speech: SpeechT5 (default)
Alternative TTS: Bark (~5 GB)
Speaker analysis pipeline
1. Screen capture (configurable FPS)
2. Vision pipeline (multi-stage optimization)
Frames pass through 4 stages before reaching the model:
3. Vision model: Qwen2.5-Omni-3B
4. Privacy:
Performance impact:
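The document doesn't enumerate the four stages, but a common first stage in pipelines like this is change detection: frames identical to the previous one never reach the model. The sketch below is a hypothetical illustration using a coarse byte-sampling hash; real pipelines typically use perceptual hashing or GPU-side diffing.

```python
import hashlib

def frame_signature(pixels: bytes, block: int = 64) -> bytes:
    """Cheap signature of a frame: hash of coarsely sampled bytes.
    Illustrative only; not the product's actual stage."""
    return hashlib.sha256(pixels[::block]).digest()

def frames_to_process(frames):
    """Yield only frames that changed since the previous one."""
    last = None
    for f in frames:
        sig = frame_signature(f)
        if sig != last:
            yield f
            last = sig

static = b"\x10" * 4096                       # an unchanging screen
changed = b"\x10" * 2048 + b"\xff" * 2048     # something moved
stream = [static, static, static, changed, changed]
print(len(list(frames_to_process(stream))))   # 2: three duplicates skipped
```

Skipping unchanged frames is what keeps a "configurable FPS" capture loop cheap: the expensive vision model only runs when the screen actually changes.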
No. Multiple layers of automatic protection:
Layer 1: Browser Extension (Automatic) — Works for All Users
How it works:
Cloud users: Browser Extension → Desktop Bridge → Cloud API → Your AI Instance Download users: Browser Extension → Local AI Companion (direct)
Layer 2: Window Detector (Automatic)
Layer 3: F12 Manual Override
For Cloud users only:
The Desktop Bridge is a tiny background app (10MB) that connects the browser extension to your cloud AI instance.
What it does:
Installation (one-time, 30 seconds):
Download users don't need this — the extension talks directly to local AI Companion.
Websites (40+ domains):
Desktop Apps:
URL Patterns:
/login, /signin, /auth, /password in URL
Visual indicators:
When sensitive content is detected, you'll see:
┌─────────────────────────────────────────┐
│ 🔒 AI Observation Paused                │
│ Reason: Password field detected         │
└─────────────────────────────────────────┘
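The URL-pattern trigger listed above can be sketched as a simple path check. The regex below is an illustrative approximation of the documented triggers, not the extension's actual matching logic:

```python
import re

# Mirrors the documented URL triggers: /login, /signin, /auth, /password.
SENSITIVE_PATHS = re.compile(r"/(login|signin|auth|password)(/|$|\?)", re.IGNORECASE)

def should_pause_observation(url: str) -> bool:
    return SENSITIVE_PATHS.search(url) is not None

print(should_pause_observation("https://example.com/login"))        # True
print(should_pause_observation("https://example.com/auth?next=/"))  # True
print(should_pause_observation("https://example.com/blog"))         # False
```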
Password fields are automatically blurred:
Check the logs:
# Desktop bridge logs (cloud users)
~/.ai-companion/logs/bridge.log

# AI Companion logs (all users)
~/ai-companion/logs/privacy.log
Test it yourself:
Cloud users will see:
┌─────────────────────────────────────────────────┐
│ ⚠️ AI Companion Bridge Not Running              │
│ Privacy protection disabled.                    │
│ Download: ai-companion.com/bridge               │
└─────────────────────────────────────────────────┘
What to do:
Download users: Bridge is built into AI Companion — just start the app.
Cloud users (3 steps):
Download users (2 steps):
Verify it's working:
# Cloud: Check bridge status
ps aux | grep ai-companion-bridge

# Download: Check AI Companion
ps aux | grep ai-companion

# Both: Test extension
# Open bank.com → Should see 🔒 indicator
"Extension can't connect to bridge"
Cloud users:
# Check if bridge is running
ps aux | grep ai-companion-bridge

# If not running, start it
/usr/local/bin/ai-companion-bridge
# Or on Windows: C:\Program Files\AICompanion\ai-companion-bridge.exe

# Check bridge logs
tail -f ~/.ai-companion/logs/bridge.log
Download users:
# Check if AI Companion is running
ps aux | grep ai-companion

# Start AI Companion
cd ~/ai-companion
python launch.py
"Extension shows error on every page"
"Bridge authenticated but pause not working"
Cloud users — test API directly:
curl -X POST https://api.app.company.earthservers.net/api/privacy/pause \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"reason": "test"}'
# Should return: {"success": true, "paused": true}

If API test works but extension doesn't:
What we can see:
What we CANNOT see:
Audit it yourself:
We're transparent:
Performance impact:
Browser extension:
Desktop bridge (cloud users):
Total overhead: Negligible — you won't notice it.
Battery impact: None (no polling, event-driven only).
Yes, multiple ways:
Option 1: Disable extension temporarily
chrome://extensions → AI Companion Privacy Shield → Toggle off. AI observation continues (no privacy protection). Re-enable when needed.
Option 2: Pause bridge (cloud users)
Right-click bridge in system tray → Pause. Extension will show "Bridge not running" warning. Resume from system tray.
Option 3: Use Incognito mode
Extension doesn't run in Incognito by default. AI Companion won't observe Incognito windows. Good for sensitive browsing without disabling main protection.
Option 4: Stop AI Companion entirely
Stops all observation (screen, voice, everything). Most drastic option.
| Component | Requirement |
|---|---|
| Browser extension | Chrome 88+, Firefox 91+, or Edge 88+. 10MB disk. No GPU required. |
| Desktop bridge (cloud) | Windows 10+, macOS 11+, or Linux (Ubuntu 20.04+). 50MB disk, 50MB RAM. Internet connection. |
| Download version | Same as above + GPU for AI Companion. GTX 1660 or better recommended. |
Smart context detection (Q2 2026)
Configurable blacklists (Q2 2026)
Privacy dashboard (Q3 2026)
Hardware security key support (Q3 2026)
Request features: feedback@earthservers.net
TPM 2.0 (Trusted Platform Module):
When you submit training data:
Why?
No TPM?
Hybrid model:
Open source (MIT license):
Closed source:
Why? Prevent competitors from cloning training infrastructure. Protect IP while being transparent. Security through obscurity for anti-abuse.
Coming in 2026 Q4:
Enterprise license ($5,000/year):
Requirements:
Use cases:
"Create a truly personal AI companion that knows you better than any assistant, respects your privacy absolutely, and grows with you over time."
Core principles:
Not just another chatbot:
Q2 2026 (2-3 months):
Q3 2026 (4-6 months):
Q4 2026 (7-9 months):
2027+:
Ways to contribute:
Feature request process:
Submit request → Community votes → Top 10 reviewed → Dev estimate → Priority queue → Implementation → Beta testing → General release
Average time from request to release: 2-3 months
We're building this with you, not just for you.
AI Companion is shaped by the people who use it. If there's a feature you want, we genuinely want to hear about it. Some of our best ideas have come directly from early users.
How feature requests work:
Our only hard rule:
We won't build features designed to deceive, manipulate, or violate the terms of service of other platforms. Everything else is on the table.
Share your ideas: feedback@earthservers.net
Honestly? This isn't about the money.
We built AI Companion because we believe everyone deserves a personal AI that's truly theirs — not a product that mines your data for someone else's profit. The subscriptions and licenses exist to cover real costs and keep the project alive, not to maximize revenue.
Where your money actually goes:
Infrastructure (the big one)
We self-host our own GPU servers instead of renting from cloud providers. This costs more upfront but saves dramatically long-term — and we pass those savings to you. Running AI models requires serious hardware: GPUs, high-speed storage, power, cooling, and datacenter space.
Development
A small team building and maintaining the software, writing new features, fixing bugs, and supporting users. No bloated corporate overhead.
Everything else
Bandwidth, backups, security audits, domain costs — the boring stuff that keeps things running reliably.
Why self-host instead of using AWS/Google Cloud?
Cloud providers charge a massive premium for GPU time. By owning our hardware, we cut infrastructure costs by over 80% compared to renting equivalent capacity. That means we can offer lower prices while still keeping the lights on. It also means your data stays on hardware we physically control — not in someone else's data center.
We don't:
We do:
We're a small, bootstrapped team. No venture capital, no pressure to "grow at all costs." If the project sustains itself and serves people well, that's the goal.
Continuity plan:
Download users:
Cloud users:
Legal:
We're in it for the long haul, but we've planned for worst case.
Quick diagnostic:
tail -f logs/companion.log # Look for errors
Bug report channels:
Please include:
Priority levels:
Refund policy:
Cloud subscriptions:
Download license:
Training add-ons:
Process: Email support@earthservers.net with order number, reason, and usage stats. Refunds processed within 5-7 business days.