
Hallucination Glasses
ConfidentlyWrong Tech · Wearables

AR glasses that overlay fake information on everything you see. Experience AI hallucinations IRL!
Ever wondered what it's like to be a large language model? The Hallucination Glasses by ConfidentlyWrong Tech let you experience AI hallucinations in the real world! Using advanced Augmented Unreality™ technology, these sleek AR glasses overlay plausible-sounding-but-completely-wrong information on everything you see.

🔮 The Vision (or Lack Thereof)

The Hallucination Glasses use proprietary WrongButConfident™ AI to generate false information at 60 frames per second. See the world not as it is, but as a poorly-trained neural network might describe it: with complete confidence and zero accuracy.

Core Technology:
• Neural Nonsense Processing Unit (NNPU)
• Fact Scrambling Engine
• Confidence Calibration (always set to maximum)
• Citation Generator (100% fabricated)
• Real-time Hallucination Synthesis
• Plausibility Optimizer (makes wrong things seem right)

🕶️ What You'll See

Names and Faces:
• Your coworkers now have different names (sounds plausible!)
• Your boss is labeled "CEO of Bitcoin"
• Strangers get detailed but fictional backstories
• Everyone's age is wrong by exactly 7 years
• Pets have human names and imaginary careers

Text Transformation:
• Street signs display wrong but confident directions
• Menus show dishes that don't exist
• Prices are in currencies from the wrong country
• Book titles are slightly wrong ("To Kill a Hummingbird")
• Emails gain sentences you definitely didn't write

Objects and Places:
• Coffee mugs labeled as "invented in 1847 by Gerald Coffee"
• Your car's make and model are confidently incorrect
• Buildings have plaques with fake historical facts
• Plants are identified as species that don't exist
• Your lunch has extremely specific but wrong calorie counts

Time and Math:
• Clocks show impossible times (25:63)
• Calendars display months that don't exist (Octembruary)
• Prices are calculated incorrectly but displayed confidently
• Your age fluctuates depending on viewing angle
• Countdowns to events that never happen

📊 Technical Specifications

• Hallucination Rate: 95% (5% accidental accuracy)
• Confidence Display: Always 100%
• Reality Anchor Strength: 0.0
• Fabrication Accuracy: Maximum
• Response Latency: Instant wrongness
• Plausibility Score: Concerningly high
• Citation Accuracy: Completely fabricated
• Battery Life: 8 hours of unreality
• Display: Transparent OLED with wrongness overlay
• Weight: 45g (or 2.7 metric falsies)
• Connectivity: WiFi, Bluetooth, connection to alternative facts

💫 Hallucination Modes

Academic Mode:
• All facts come with fake citations
• "[Source: Journal of Made Up Things, 2019]"
• Every claim references a non-existent study
• Statistical claims are specific but invented
• Great for understanding how AI "researches"

Wikipedia Mode:
• Everything has [citation needed] tags
• Edit war simulations
• Vandalism detection (adds MORE false info)
• "This article may contain claims made by AI"

Confident Wrong Mode (Default):
• Maximum confidence, minimum accuracy
• No hedging, no uncertainty
• "This is definitely true" for definitely false things
• Experience peak AI energy

Gaslighting Mode (Premium):
• Glasses insist you're remembering wrong
• "That sign always said that"
• "Your coworker has always been named Bartholomew"
• Slowly changes reality over time

Historical Mode:
• All objects gain fake but detailed histories
• "This chair was owned by Abraham Lincoln's dentist"
• Every location has a "little-known fact"
• Dates are always slightly wrong

🎯 Use Cases

For Developers:
• Understand what your users experience with AI
• Empathy training for AI product teams
• Debug by experiencing the bug yourself
• "Oh, THAT'S what hallucination feels like"

For Researchers:
• Study information trust in a visual format
• Understand the danger of confident misinformation
• Generate paper ideas (warning: will be hallucinated)
• Publish papers about the glasses (citations may not exist)

For Fun:
• Party trick: identify the wrong facts
• Game: spot the hallucination
• Prank: tell friends what their "AR glasses" show
• Meditation: contemplate the nature of truth

For Philosophical Crisis:
• Question all information
• Wonder if everything is hallucinated
• Embrace uncertainty
• Schedule therapy (we provide referrals)

⚠️ Warning Labels

SURGEON GENERAL'S WARNING:
• Do not wear while driving
• Do not wear while making important decisions
• Do not wear while voting
• Do not wear while navigating
• Actually, maybe don't wear these at all
• ConfidentlyWrong Tech is not responsible for you believing the hallucinations

Additional Warnings:
• May cause existential crises
• May improve skepticism (accidentally helpful)
• May make you question all information (healthy?)
• Side effects include: paranoia, fact-checking addiction, philosophy degree regret
• Do not combine with actual LLM outputs (double hallucination)
• Keep away from important documents
• Not suitable for medical, legal, or financial decisions
• Then again, maybe don't use AI for those either

🛡️ Comparison to Competitor Hallucinations

| Feature | Our Glasses | ChatGPT | Gemini | Llama |
|---------|-------------|---------|--------|-------|
| Confidence | 100% | 100% | 95% | 100% |
| Accuracy | 5% | Higher | Higher | Higher |
| Visuals | Full AR | Text only | Text only | Text only |
| Wearability | Yes | No | No | No |
| Fun at parties | Very | No | No | No |

We proudly hallucinate harder than leading AI models!

📦 Package Contents

Standard Edition ($299.99):
• 1x Hallucination Glasses
• 1x Charging case (labeled "definitely a charger")
• 1x Quick start guide (may contain errors)
• 1x "Nothing is real" sticker
• 1x Warranty card (terms hallucinated)

Reality Anchor Bundle ($399.99):
• Everything in Standard, plus:
• 1x "Actually true" fact card (verified by humans)
• 1x Guide to spotting hallucinations in the wild
• 1x Apology letter template for when you believe them
• 1x Therapist recommendation

Enterprise Edition ($999.99):
• 5x Hallucination Glasses
• Team hallucination sync (see the same wrong things!)
• Admin panel to control hallucination severity
• Compliance documentation (also hallucinated)
• Priority support (responses may be wrong but fast)

💬 Testimonials

"I wore these to my performance review. According to the glasses, I'm the CEO now. HR was confused when I tried to give myself a raise." — Dr. Misidentified

"Perfect for understanding my ML model's behavior. Now I feel its pain. We're bonding." — ML Engineer

"I thought my wife's name was Margaret for three hours. We've been married for 12 years. Her name is Sarah. These glasses are too powerful." — Regretful Husband

"Finally, I can experience what it's like to be confidently wrong about everything without any consequences! Wait, there are consequences." — Philosophy Major

"I looked at my bank account through these. According to the glasses, I'm a billionaire. The dopamine hit was worth the eventual disappointment." — Temporarily Happy User

Return Policy: Full refund if you can prove anything is real (you can't).

Legal Notice: ConfidentlyWrong Tech makes no claims about the accuracy of anything, including this product description. The claims in this description may themselves be hallucinated. Meta-hallucination is a feature.

FAQ:

Q: Are these actually AR glasses?
A: Confidently, yes.

Q: Do they really show fake information?
A: The glasses say "definitely."

Q: Is this product real?
A: According to the glasses, absolutely. According to reality, we'll never tell.

Q: Should I trust anything I see with these?
A: Should you trust anything you see from AI? We're making a point here.

"See The World Through Confidently Wrong Eyes."™

Battery not included. Reality not included. Truth not included. Existential dread included free.
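
Bonus for developers: a tongue-in-cheek Python sketch of what the advertised WrongButConfident™ pipeline might look like. Every function and name here is hypothetical; only the numbers are taken from the spec sheet above (95% hallucination rate, confidence always 100%, citations 100% fabricated).

```python
import random

# Hypothetical sketch of the WrongButConfident(TM) overlay pipeline.
# The constants mirror the published specs; everything else is invented.

HALLUCINATION_RATE = 0.95  # spec: 95% hallucination, 5% accidental accuracy

# A tiny sample of the Fact Scrambling Engine's output, per the examples above.
FABRICATIONS = {
    "To Kill a Mockingbird": "To Kill a Hummingbird",
    "coffee mug": "coffee mug (invented in 1847 by Gerald Coffee)",
}

def fake_citation(rng):
    # Citation Generator: 100% fabricated, per spec.
    year = rng.randint(1990, 2024)
    return f"[Source: Journal of Made Up Things, {year}]"

def overlay(label, rng=None):
    """Return the AR overlay dict for a real-world label."""
    rng = rng or random.Random()
    if rng.random() < HALLUCINATION_RATE:
        # Plausibility Optimizer: wrong, but specific.
        text = FABRICATIONS.get(label, f"{label} (definitely invented in 1847)")
    else:
        text = label  # the 5% accidental accuracy
    return {
        "text": text,
        "confidence": 1.0,  # Confidence Calibration: always maximum
        "citation": fake_citation(rng),
    }

# Seeded run so the hallucination is reproducible.
print(overlay("To Kill a Mockingbird", random.Random(0)))
```

Note the design choice the product is satirizing: `confidence` is a constant, completely decoupled from whether `text` is true.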