# robots.txt for https://topia.io
# Goal: maximize search and AI retrieval crawlability while disallowing AI model training

# Default: allow everything for standard web indexing crawlers
User-agent: *
Allow: /

Sitemap: https://topia.io/sitemap.xml
Sitemap: https://schoolspace.io/sitemap.xml

# Core search crawlers
User-agent: Googlebot
Allow: /

User-agent: GoogleOther
Allow: /

User-agent: Bingbot
Allow: /

User-agent: BingPreview
Allow: /

User-agent: DuckDuckBot
Allow: /

User-agent: Applebot
Allow: /

# OpenAI - search and live fetch
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

# Anthropic - search and live fetch (ClaudeBot, used for model training, is blocked below)
User-agent: Claude-SearchBot
Allow: /

User-agent: Claude-User
Allow: /

# Perplexity
User-agent: PerplexityBot
Allow: /

User-agent: Perplexity-User
Allow: /

# AI model training crawlers - blocked, per the goal above
User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Amazonbot-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

# Optional sensitive areas
# Disallow: /admin/
# Disallow: /account/
# Disallow: /api/private/
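
# Note: under the Robots Exclusion Protocol (RFC 9309), a crawler with its own
# named group follows only that group and ignores the "User-agent: *" group.
# If the optional sensitive-area rules above are uncommented, they must also be
# repeated inside each named group that should honor them. Hypothetical sketch,
# using names already present in this file:
#
# User-agent: Googlebot
# Disallow: /admin/
# Allow: /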