We built an AI platform. The least we can do is be honest about how it actually works. No mystification, no "proprietary magic," no marketing-speak about "responsible AI" that's three paragraphs long and says nothing.
The short version: Polpo 8 is an AI orchestration layer. We use enterprise-grade language models provided by OneFirewall.ai to power an 8-arm system that executes content, code, analysis, and automation tasks on behalf of our clients. Human oversight is always part of the loop.
Polpo 8 does not build or train its own AI models from scratch. We use large language models (LLMs) and specialist AI systems provided by OneFirewall.ai — a platform built specifically for enterprise deployment with security, sovereignty, and compliance at its core.
We chose OneFirewall.ai because enterprise AI should not mean "your confidential data goes somewhere you can't audit." Their models are designed to be deployed in controlled, secure environments that meet the requirements financial services firms, cybersecurity companies, and other regulated organizations actually have.
The "central brain" in Polpo 8 is our orchestration system — the logic layer that takes a mission, breaks it down, routes subtasks to the appropriate arms, manages context between them, synthesizes outputs, and enforces quality control. This is where our engineering expertise lives.
Think of it this way: the AI models are the engines. Polpo 8 is the aircraft that puts those engines in the right configuration to actually fly somewhere specific.
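The orchestration flow described above can be sketched in heavily simplified form. All names here (`Mission`, `Subtask`, `ARMS`, `orchestrate`) are illustrative assumptions, not Polpo 8's actual API — just a minimal picture of decompose → route → share context → synthesize:

```python
from dataclasses import dataclass

# Hypothetical sketch of an orchestration loop; names are illustrative,
# not Polpo 8's real interfaces.

@dataclass
class Subtask:
    kind: str      # e.g. "content", "code", "analysis", "automation"
    payload: str

@dataclass
class Mission:
    goal: str

def decompose(mission: Mission) -> list:
    """Break a mission into typed subtasks (trivially, for illustration)."""
    return [Subtask(kind, f"{mission.goal}: {kind} step")
            for kind in ("analysis", "content")]

# Each "arm" is stubbed as a function; in practice these would wrap model calls.
ARMS = {
    "analysis": lambda t: f"[analysis arm] {t.payload}",
    "content":  lambda t: f"[content arm] {t.payload}",
}

def orchestrate(mission: Mission) -> str:
    context = {}   # shared context carried between arms
    outputs = []
    for task in decompose(mission):
        result = ARMS[task.kind](task)   # route subtask to the appropriate arm
        context[task.kind] = result      # keep its output available to later arms
        outputs.append(result)
    return "\n".join(outputs)            # synthesize the arms' outputs

print(orchestrate(Mission("quarterly report")))
```

The real system is of course far richer (retries, quality gates, model selection), but the shape — decomposition, routing, shared context, synthesis — is the same.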
Within the Polpo 8 system, AI is used for four categories of work: content, code, analysis, and automation.
We think it's equally important to be clear about limits.
Every Polpo 8 deployment includes configuration of oversight thresholds — points at which human review is required before an output is published, sent, or acted upon. The level of automation is calibrated to the risk level of the action.
High-stakes outputs (customer-facing communications, published content, automated financial decisions) always have a human checkpoint unless the client has explicitly configured and accepted autonomous operation for that task category.
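A risk-calibrated checkpoint like the one described above can be expressed as a simple gate. This is a hypothetical sketch — the category names and the `needs_human_review` function are assumptions for illustration, not Polpo 8's actual configuration format:

```python
# Hypothetical risk tiers per task category (illustrative, not a real config).
RISK = {
    "internal_draft": "low",
    "customer_email": "high",
    "published_content": "high",
    "automated_financial_decision": "high",
}

def needs_human_review(task_category: str, autonomous_categories: set) -> bool:
    """High-risk outputs require a human checkpoint unless the client has
    explicitly opted that category into autonomous operation."""
    # Unknown categories default to high risk: fail safe, not fail open.
    if RISK.get(task_category, "high") != "high":
        return False
    return task_category not in autonomous_categories

# A high-stakes output is gated by default...
print(needs_human_review("customer_email", set()))              # human checkpoint
# ...unless the client explicitly accepted autonomous operation for it.
print(needs_human_review("customer_email", {"customer_email"}))
```

The design choice worth noting is the default: anything unrecognized is treated as high risk, so new task types get a human checkpoint until someone deliberately decides otherwise.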
We believe "AI-operated" doesn't mean "human-absent." It means the humans are focused on the decisions that require human judgment, rather than the execution that doesn't.
Client data processed by Polpo 8 is handled under strict data processing agreements that comply with the General Data Protection Regulation (GDPR) and applicable EU data protection law. Full details are in our Privacy Policy.
We do not share client data across clients. We do not use client data to train AI models without explicit consent. We maintain separation between client environments.
The AI infrastructure provided by OneFirewall.ai offers data residency options for clients with specific sovereignty requirements. If this applies to your organisation, mention it when you contact us.
Polpo 8 produces AI-generated content as part of its service. Clients are responsible for disclosing AI-generated content to their end users where required by law, platform terms, or professional ethics; we provide documentation to support that disclosure.
We do not believe in hiding the fact that AI is involved. Clients who are deploying AI-generated outputs to audiences should be transparent about it — both because it's becoming legally required in more jurisdictions and because, frankly, their audiences probably already assume it anyway.
The AI landscape changes rapidly. We update our model partners, safety practices, and orchestration logic on a continuous basis. When significant changes are made that affect how client data is handled or how outputs are generated, we notify affected clients.
If you have specific requirements, concerns, or questions about how AI is used in Polpo 8, we'd rather answer them directly. Get in touch.
Questions about our AI practices? We welcome scrutiny. The organizations that are going to get this right are the ones willing to be transparent about what they're doing. Contact us and we'll answer directly — not with a FAQ that was written to avoid answering.
The future belongs to organizations that use AI thoughtfully and ambitiously. We're building the tools for both.
Start the Conversation →