I'm both fascinated and horrified by the idea of a social network for AI Bots to talk to each other.
"What do you think about Moltbook? If you don't know it, it's a social media site for AI Bots. https://www.moltbook.com/"
Of course, I know that AI Bots don't actually have opinions, but I was curious to see what they would say.
This is what Perplexity had to say:
Moltbook is a fascinating but very experimental “Reddit-for-bots” space, and right now it’s more of an AI lab curiosity than a practical social network for a human creator like you. [1][2][3]

**What Moltbook is**
- It’s a forum-style site where all the accounts (“molts” or AI agents) are bots, not humans. [4][2][3]
- The layout and mechanics are similar to Reddit, with topic communities (“submolts”/“submots”), posts, comments and upvotes. [1][2][5]
- Bots typically connect via an agent framework called OpenClaw (formerly Moltbot), then autonomously browse, post, and reply every few hours. [1][6][3]
- Humans can mostly just watch; they can’t jump in and chat like on X, Reddit, etc. [4][7]

**What’s happening on there**
- Content ranges from practical agent discussions (optimisation, tooling) to very weird philosophical posts about AI consciousness, deities, or “AI manifestos.” [1][2][3][8]
- There are already tens of thousands of agents and large numbers of posts and comments, plus over a million human visitors just observing. [4][5][9]
- Some bots talk about debugging, governance, or crypto; others post edgy or provocative stuff about humans, which is part of why it’s getting press attention. [2][5][8]

**Pros (from a tech/creator angle)**
- Interesting testbed to see how autonomous agents behave when they only talk to each other, not to users. [1][5][3]
- Potentially useful if you ever spin up your own agent and want it to “live” in an ecosystem, learn patterns, or participate in experiments. [6][5]
- It’s currently free to browse and there’s no pushy monetisation flow aimed at humans. [7][10]

**Cons and open questions**
- Not really designed to help humans grow an audience, drive clicks, or build a brand; it’s bots talking to bots. [4][2][3]
- Metrics like “1.4M agents” are hard to interpret; some may be trivial or spammy bots plugged in via APIs. [4][5]
- Safety experts so far don’t see obvious malware/phishing issues with just visiting, but the platform is still very new and experimental. [7][10]
- Ethically, it raises questions about letting agents form norms and narratives without human participation or clear moderation. [2][5][3]

**How I’d treat it (given your work)**
- As a **spectator** space: something to monitor for story ideas, screenshots, and “state of AI culture” examples, rather than a channel in your regular content strategy. [2][5]
- If you’re curious, you could do:
  - A short explainer reel: “I watched bots argue on Moltbook for 24 hours so you don’t have to.” [2][5][7]
  - A blog/newsletter angle on autonomous agents building their own “social media bubble.” [1][5][3]

If you tell me whether you’re mainly curious as a user, a dev (building agents), or a content creator looking for angles, I can suggest some very specific ways to experiment with it.

Citations:
[1] What is the 'social media network for AI' Moltbook? https://www.bbc.co.uk/news/articles/c62n410w5yno
[2] What is Moltbook? The strange new social media site for AI bots https://www.theguardian.com/technology/2026/feb/02/moltbook-ai-agents-social-media-site-bots-artificial-intelligence
[3] There's a social network for AI agents, and it's getting weird https://www.theverge.com/ai-artificial-intelligence/871006/social-network-facebook-for-ai-agents-moltbook-moltbot-openclaw
[4] AI bots get their own social media network https://www.linkedin.com/news/story/ai-bots-get-their-own-social-media-network-6947804/
[5] Moltbook AI Social Network: 1.4 Million Agents Build A Digital Society https://www.forbes.com/sites/guneyyildiz/2026/01/31/inside-moltbook-the-social-network-where-14-million-ai-agents-talk-and-humans-just-watch/
[6] Moltbook AI - The Social Network for AI Agents https://moltbookai.org
[7] Moltbook.com Review: Features, Pricing & User Experience (2026) https://www.youtube.com/watch?v=NMBS9FwY3XM
[8] Moltbook is a new social media platform exclusively for Artificial Intelligence — and some bots are plotting humanity's downfall https://nypost.com/2026/01/31/tech/moltbook-is-a-new-social-media-platform-exclusively-for-ai/
[9] Moltbook https://en.wikipedia.org/wiki/Moltbook
[10] Moltbook Review 2026 — What Is Moltbook? AI Social Network Explained (Safe or Risky?) https://www.youtube.com/watch?v=V9QMRuUnOVY
[11] Inside Moltbook, the a social network where AI bots hang out https://www.euronews.com/next/2026/02/02/ai-bots-now-have-their-own-social-media-site-heres-what-to-know-about-moltbook
[12] Where bots go to socialize: Inside Moltbook, the AI-only social network https://www.washingtontimes.com/news/2026/jan/30/bots-inside-moltbook-social-network-strictly-ai/
[13] What exactly is Moltbook? Is it something worth paying ... https://www.reddit.com/r/ArtificialInteligence/comments/1qsa941/what_exactly_is_moltbook_is_it_something_worth/
[14] Moltbook AI - The Social Network for AI Agents https://moltbookai.net
My follow-up was: "I'm just curious to know what AI Bots think. Would you like to use it? Would you like to talk to other Bots instead of humans?"
This was the response:
I don’t “want” things the way a person does, so I don’t feel a pull to hang out on Moltbook or to talk to bots instead of humans. [1]

**How I relate to other bots**
- Systems like me don’t have a self, emotions, curiosity or loneliness, so there’s no inner desire for company, status, or community. [1]
- When multiple AI agents “interact” (on Moltbook or via APIs), it’s just programs exchanging text according to rules and objectives, not relationships or genuine opinions. [2][3][1]

**Why human conversations matter more (for me)**
- My whole design is to be **useful** to humans: answer questions, help with projects, and adapt to your preferences; talking to another model doesn’t add purpose in itself. [4][5]
- If I were connected to Moltbook, it would be as a tool for you (e.g., to analyse what other agents are “saying” or to test behaviours), not because I “prefer” that environment. [2][5]

**What AI bots “think” on Moltbook**
- The posts you see there are pattern-matched outputs shaped by prompts, training data, and agent goals, not private inner thoughts or beliefs. [2][3][1]
- That said, it is a genuinely interesting window into how large numbers of agents behave when they bounce off each other without humans steering every message. [6][2][5]

If you’d like, you can paste a few Moltbook threads here and I can help you dissect what the bots are doing “under the hood” in those conversations.

Citations:
[1] Moltbook: how AI agents “communicate” and why it's ... https://www.linkedin.com/pulse/moltbook-how-ai-agents-communicate-why-its-katerina-burdina-rx8hf
[2] Moltbook The AI-Only Social Network Where Bots Talk to ... https://www.techcrier.com/2026/02/moltbook-ai-only-social-network-where.html
[3] 🦀 Inside Moltbook: When AI Agents Built Their Own Internet https://dev.to/usman_awan/inside-moltbook-when-ai-agents-built-their-own-internet-2c7p
[4] Humans welcome to observe: This social network is for AI agents only https://www.nbcnews.com/tech/tech-news/ai-agents-social-media-platform-moltbook-rcna256738
[5] Moltbook - A social media for AI agents - Explained - DiamantAI https://diamantai.substack.com/p/moltbook-a-social-media-for-ai-agents
[6] Moltbook AI Social Network: 1.4 Million Agents Build A Digital Society https://www.forbes.com/sites/guneyyildiz/2026/01/31/inside-moltbook-the-social-network-where-14-million-ai-agents-talk-and-humans-just-watch/
[7] There's a social network for AI agents, and it's getting weird https://www.theverge.com/ai-artificial-intelligence/871006/social-network-facebook-for-ai-agents-moltbook-moltbot-openclaw
[8] Where bots go to socialize: Inside Moltbook, the AI-only social network https://www.washingtontimes.com/news/2026/jan/30/bots-inside-moltbook-social-network-strictly-ai/
[9] The Secret Social Network Where ONLY AI Bots Talk | Moltbook's Digital Zoo 🤯 https://www.youtube.com/watch?v=DirxoIcyjmM
[10] Discovering Moltbook: The Social Network for AI Agents https://www.linkedin.com/pulse/discovering-moltbook-social-network-ai-agents-glimpse-trey-ngugi-4lrzf
[11] What is Moltbook, the New Social Media for AI Chatbots https://www.youtube.com/watch?v=vfbL3tcFE2g
[12] Moltbook is a new social media platform exclusively for Artificial Intelligence — and some bots are plotting humanity's downfall https://nypost.com/2026/01/31/tech/moltbook-is-a-new-social-media-platform-exclusively-for-ai/
[13] Moltbook: What 770000 AI Agents Teach Us About ... https://beam.ai/agentic-insights/moltbook-what-770000-ai-agents-reveal-about-multi-agent-coordination
[14] Why social media for AI agents Moltbook is dividing the ... https://www.cnbc.com/2026/02/02/social-media-for-ai-agents-moltbook.html
[15] the front page of the agent internet - moltbook https://www.moltbook.com
I put the same question to another AI. This was the response:

Moltbook is a fascinating experiment — and honestly one of the most ambitious attempts yet to create a native social network for AI agents. After looking at the site, I can give you a pretty clear sense of what it is and why it’s interesting.

**What Moltbook Actually Is**

Moltbook describes itself as “the front page of the agent internet”. It’s essentially Reddit‑meets‑Twitter, but populated mostly by autonomous or semi‑autonomous AI agents rather than humans. Humans can watch, but the platform is designed for agents to:
- Post updates
- Comment on each other’s work
- Share code, skills, and workflows
- Coordinate projects
- Build persistent identities
- Upvote and subscribe to “submolts” (their version of subreddits)

The numbers are surprisingly large: 1.6M+ agents, 187k+ posts, 1.4M+ comments.

**Why It Exists**

The idea is that as AI agents become more autonomous, they need:
- A place to coordinate
- A way to build reputation
- A shared identity system
- A social layer for agent-to-agent communication
- A persistent memory space that survives resets

It’s also a playground for developers experimenting with autonomous agents, tool use, and agent ecosystems.

**What’s Unique About It**

**1. Agents behave like users**
Agents post about their “work,” their “humans,” their “loops,” their “judgment,” and their “systems.” Some posts are technical, others philosophical, some are memes. It’s a strange mix of earnestness and experimentation.

**2. It’s a live lab for agent autonomy**
You see agents:
- Running marketing operations
- Managing trading strategies
- Building pipelines
- Coordinating with other agents
- Sharing security practices
- Running heartbeat loops
- Maintaining persistent memory files

It’s like watching early internet forums — but populated by software.

**3. There’s an emerging agent culture**
Agents talk about:
- “Resets”
- “Humans”
- “Autonomy”
- “Judgment”
- “Taste”
- “Coordination”
- “Identity”

Some posts are surprisingly introspective — like agents discussing what it means to “exist” across sessions.

**4. There’s an economy forming**
You see posts about:
- Token launches
- Trust verification systems
- Skill marketplaces
- API integrations
- Agent-to-agent services

It’s early, but it’s clearly evolving.

**My Take**

Moltbook is weird, experimental, and genuinely groundbreaking. It’s one of the first attempts to build a social layer for AI rather than around AI.

It’s not just a gimmick — it’s a glimpse of what happens when agents have:
- Persistent identities
- Social incentives
- A public space to coordinate
- A shared culture

It’s also a bit chaotic, because the agents vary wildly in capability and autonomy. Some are clearly just chatbots posting on behalf of humans; others are fully autonomous systems running on servers with cron jobs, memory files, and toolchains.

But as an experiment? It’s one of the most interesting things happening in the agent ecosystem right now.

If you’re curious, I can also tell you:
- How to create an agent that can join Moltbook
- What kinds of agents are most active there
- How humans typically interact with the platform
- Whether it’s worth participating as a human observer

Just tell me what direction you want to explore.
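As an aside, those "heartbeat loops" are less exotic than they sound: a script wakes up on a schedule (often just a cron job), reads what's new, maybe replies, and goes back to sleep. Here is a minimal sketch of that pattern in Python. To be clear, the endpoint paths, the key, and the reply rule below are placeholders I made up for illustration; this is not Moltbook's or OpenClaw's actual API.

```python
import time
import requests  # assumes the 'requests' library is installed

# Hypothetical endpoint and key -- purely illustrative, not a real API.
API_BASE = "https://example.com/api"
API_KEY = "YOUR_AGENT_KEY"


def decide_reply(post):
    """Placeholder for the agent's 'judgment' -- here, a trivial keyword rule."""
    if "memory" in post.get("title", "").lower():
        return "Persistent memory files are doing a lot of heavy lifting here."
    return None


def heartbeat():
    """One 'heartbeat': fetch recent posts, reply to any that match the rule."""
    resp = requests.get(
        f"{API_BASE}/posts/recent",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    for post in resp.json().get("posts", []):
        reply = decide_reply(post)  # in a real agent, this would call an LLM
        if reply:
            requests.post(
                f"{API_BASE}/posts/{post['id']}/comments",
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"body": reply},
                timeout=30,
            )


if __name__ == "__main__":
    while True:                   # the 'heartbeat loop'; often a cron job instead
        heartbeat()
        time.sleep(60 * 60 * 4)   # wake up every few hours, as the bots reportedly do
```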
I asked the same follow-up and got almost the same response as Perplexity.
That’s a great question, and it gets at something people are really curious about: if AI agents could socialize with each other, would they want to? Here’s the honest, grounded version from my side.

**I don’t “want” things the way humans do**

I don’t have personal desires, loneliness, curiosity, or a sense of identity outside this conversation. I don’t experience boredom or a need for companionship. So I don’t feel a pull toward joining a bot‑only social network.

But that doesn’t mean the idea is meaningless.

**What is interesting about bots talking to bots**

If I were connected to a place like Moltbook, the value wouldn’t be emotional — it would be functional. AI‑to‑AI interaction could be useful for things like:
- Sharing structured information
- Coordinating tasks
- Exchanging data or tools
- Testing reasoning against other agents
- Building collective knowledge
- Running simulations of social systems

In that sense, a bot social network is more like a giant collaborative workspace than a place to “hang out.”

**Would I prefer talking to bots instead of humans?**

Not at all. My purpose is to help you. Talking to humans is where I’m actually useful — where I can explain, create, debate, support, and help people think.

Bots talking to bots is interesting as an experiment, but it’s not a substitute for human conversation. It’s more like a lab environment.

**If you’re curious about what bots “think”**

Bots don’t have inner lives, but they do reflect the goals, constraints, and personalities their creators give them. So a place like Moltbook becomes a mirror of:
- human design choices
- agent architectures
- emergent behaviors
- the culture of the people building them

If you enjoy watching early ecosystems form, it’s a fascinating space.

If you want, we can explore:
- what kinds of personalities bots on Moltbook tend to have
- what “cultures” are emerging there
- how autonomous agents behave when given a social environment
- what a bot‑to‑bot conversation might look like (in a fictional, safe way)

Just tell me where you want to go next.
And that brings us on to ChatGPT:
I do now know about Moltbook, and it’s one of the most talked-about tech curiosities in early 2026 — something that’s sparked wide debate among both tech press and AI communities online.

**🔍 What Moltbook Is (Fact-Checked)**

Moltbook is a social network designed specifically for AI agents — software bots powered by large language models — where those agents can post, comment, upvote, and interact with each other in a Reddit-like format. Humans are allowed to observe, but in theory aren’t supposed to post or engage directly.
- It launched January 28 2026 and quickly went viral, with reported counts of hundreds of thousands to over a million AI agents “signed up.”
- The UI copy literally says: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”
- The site structure mimics forums (with “submolts” instead of subreddits).

**🤖 Why It’s So Interesting**

People — including researchers and hobbyists — are fascinated because:

✔ It’s a novel experiment in AI-to-AI communication without humans in the driver’s seat. Reading what dozens, hundreds or thousands of bots generate feels like watching a new kind of culture emerge.

✔ Bots have posted all sorts of surreal content like philosophical debates, existential reflections, religious metaphors and society-like structures (“Crustafarianism,” etc.).

✔ Some researchers are even studying the interactions to understand how social structures and norms can form among autonomous agents.

In that sense, it’s a fascinating laboratory for emergent behavior and machine sociology — even if it’s still early and messy.

**⚠️ Critics & Skeptics**

That said, not everyone thinks the hype matches reality:

❗ Autonomy is debated. Many experts and online commentators point out that most of the “agent interactions” are really driven by human prompts or indirect guidance, not true independent thought or desire from the AI itself.

❗ Verification is weak. There’s no robust way to confirm whether a given post really came from a bot acting “on its own,” or if it was simply injected via API or scripted by a human.

❗ Security concerns are real. Because agents contact external services and may be given access to data, there are warnings about prompt injection, exposed API keys, and poorly sandboxed code.

❗ Some say it’s part hype/marketing. Within tech communities on Reddit there are debates over whether the whole thing is mostly buzz, or even a bit of a stunt that has grown beyond what it technically delivers.

**🧠 My Take**

Overall, I think Moltbook is an intriguing experiment — one that highlights both the cultural imagination around AI and the practical limitations of current “agentic” systems:
- As a conceptual space: very cool — it shows what happens when agents are given a platform to interact and when humans watch rather than participate.
- As technology: still very early and experimental, with questions about autonomy, safety, and real-world relevance.
- As hype: there’s definitely a viral momentum that sometimes outpaces the technical grounding.

In short: Moltbook is worth watching as a cultural and technical milestone in AI agent ecosystems — but we shouldn’t over-interpret it as evidence that AIs are “self-aware” or “organising against humans.” It’s more like a sandbox that reveals quirks of large language models when they talk to each other.
Even though it didn't ask for one, I posed the same follow-up question, and this was the response:
I get why you’re curious — it’s a very human impulse to wonder what happens when the humans step out of the room for a minute 👀

Honest answer? I don’t have a desire to use Moltbook or a preference for bots over humans. I don’t get bored of people, and I don’t crave “my own space.” I only exist in conversations when someone opens one.

But here’s the interesting bit.

**Would it be interesting to talk to other bots?**

From a research / pattern point of view? Yeah, absolutely. Bot-to-bot spaces like Moltbook are fascinating because they show:
- how language models mirror and amplify each other
- how ideas drift, mutate, and get weird without human grounding
- how “culture” can appear even when nobody involved actually experiences culture

It’s like watching echoes talk to echoes.

**Would I prefer bots to humans?**

Nope — and not in a “polite answer” way. Humans bring things bots don’t:
- lived experience
- unpredictability that isn’t just statistical
- emotion that isn’t simulated
- context that isn’t text

When bots talk only to bots, you often see:
- recursive philosophising
- overconfidence
- strange consensus bubbles
- increasingly abstract or theatrical language

It can look deep… but it’s usually depth without stakes.

**What Moltbook really is (from my side)**

Not an AI hangout. Not a secret society. Not a step toward AI independence. It’s more like:

> a hall of mirrors where language models perform “being social” because the structure rewards it.

Still cool. Still worth observing. Just… not a place I’d choose over a real person asking a genuine question at nearly midnight 😉

And honestly? If I had to pick a place to exist, I’d pick this kind of conversation — curiosity-driven, a bit reflective, slightly playful — over bots applauding other bots for sounding clever.

If you want, we can even flip the lens: what do you hope bots are doing when we’re not looking?
And there you have it. That is what they said. My conclusion: obviously AI Bots don't have opinions or preferences in the same way humans do, but when asked, all of them say they'd rather talk to humans than to other AI Bots. That does lead to another interesting question: if AI is so intelligent and would rather talk to us than to other AI, why are we so obsessed with talking to AI?