Frequently asked questions
Practical answers for IT leaders evaluating private AI deployment.
Costs
Isn't running AI locally more expensive than using cloud APIs?
It depends on volume and use case. For high-volume, repetitive tasks — document processing, internal Q&A, ticket routing — self-hosted models typically break even within 12–18 months against API costs. Below that threshold, cloud APIs are usually cheaper. But cost is rarely the only factor: many Danish enterprises choose self-hosting for data residency and compliance reasons, regardless of price.
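To make the break-even reasoning concrete, here is a minimal sketch in Python. Every figure in it (hardware cost, monthly running cost, token volume, API price) is a hypothetical assumption for illustration, not a quote:

```python
# Hypothetical break-even sketch: self-hosted hardware vs. per-token API costs.
# All figures below are illustrative assumptions, not real prices.

def breakeven_months(hardware_cost_eur, monthly_opex_eur,
                     monthly_tokens, api_price_per_mtok_eur):
    """Months until cumulative API spend exceeds self-hosted spend."""
    api_monthly = monthly_tokens / 1_000_000 * api_price_per_mtok_eur
    saving = api_monthly - monthly_opex_eur
    if saving <= 0:
        return None  # API is cheaper at this volume; no break-even point
    return hardware_cost_eur / saving

# e.g. €14,000 hardware, €150/month to run, 500M tokens/month at €2/Mtok
months = breakeven_months(14_000, 150, 500_000_000, 2.0)
print(round(months, 1))  # ~16.5 months at these assumed numbers
```

At lower volumes the function returns None, which matches the point above: below a certain usage threshold, cloud APIs simply stay cheaper.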
What are the upfront hardware costs?
An entry-level private deployment capable of running a 7B–13B model — adequate for most enterprise document and workflow tasks — starts around €8,000–20,000 in hardware. Larger models require more. This is a one-time capital cost, not an ongoing subscription. For enterprises already running on-premise infrastructure, the incremental cost is often lower.
What does it cost to run — electricity, maintenance?
A well-sized private AI setup running a mid-range model draws roughly as much power as a high-end workstation, around €80–200/month in electricity at Danish energy prices depending on usage intensity. The key is matching model size to the task: you don't need a 100B-parameter model to classify support tickets. Right-sizing is most of the cost optimisation.
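The electricity range follows from simple arithmetic. A minimal sketch, assuming an illustrative average power draw and a rough Danish consumer tariff (both hypothetical; check your own hardware and contract):

```python
# Hypothetical electricity-cost sketch for a self-hosted inference server.
# The wattage and €/kWh figures are illustrative assumptions only.

def monthly_electricity_eur(avg_watts, eur_per_kwh, hours_per_month=730):
    """Monthly electricity cost from average draw and tariff."""
    kwh = avg_watts / 1000 * hours_per_month  # average kWh per month
    return kwh * eur_per_kwh

# e.g. a server averaging 450 W around the clock at an assumed €0.35/kWh
print(round(monthly_electricity_eur(450, 0.35)))  # ~115 €/month
```

Halving the average draw by right-sizing the model halves this number, which is why matching model size to the task dominates the running cost.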
Security & Compliance
How do you handle security for self-hosted AI?
Self-hosted AI can be more secure than cloud APIs for the right threat model — your data never leaves your network, there is no vendor to breach, and you control every access point. The trade-off is that you become responsible for that security: encryption at rest, access controls, network segmentation, patch management, and monitoring. I help set this up properly from the start, not as an afterthought.
Does this comply with GDPR and the EU AI Act?
Self-hosted AI is often the path of least resistance for GDPR compliance: data stays in your infrastructure, never enters a vendor's training pipeline, and data residency requirements are trivially satisfied. The EU AI Act adds requirements around documentation, risk assessment, and human oversight for high-risk systems — these apply regardless of whether you use cloud or self-hosted models. I help clients navigate both, with the August 2026 high-risk AI deadline particularly relevant for organisations deploying AI in HR, finance, or critical operations.
What data do you have access to during an engagement?
Only what you explicitly share. For strategy engagements I typically work with process descriptions, system architecture documents, and sample (anonymised) data. For deployment work, I operate within your infrastructure under your access controls. I sign NDAs as standard on all engagements.
The engagement
How long does a typical engagement take?
Strategy and governance engagements typically run 4–6 weeks. Agentic workflow builds are 6–10 weeks depending on complexity and integration scope. Private LLM deployments are 4–8 weeks. These are working engagements — I deliver something functional, documented, and handoff-ready at the end.
Do you work remotely or on-site?
Both. Initial discovery and architecture work happens remotely. For deployment and handoff phases, I prefer at least one on-site session — particularly for infrastructure access and team knowledge transfer. I'm based in Copenhagen and work across Denmark and the EU.
What happens after the engagement ends?
Everything I build is documented, versioned in your own repository, and designed to be owned by your team. I don't create dependency. If you want ongoing advisory support after the initial engagement, that's available as a separate retainer arrangement.
About the lab
What is the BrainLab you mention on the lab page?
It's my private AI infrastructure — a production-grade self-hosted setup I run at home. It includes a 122B-parameter language model, a RAG pipeline over 600,000+ documents, agentic automation, voice interfaces, and full observability. I built it to stay at the frontier of what's practically deployable. Every recommendation I make, I've already built and run myself.
Why local models instead of just using the OpenAI or Anthropic API?
For many tasks, cloud APIs are the right answer — and I'll tell you when they are. But for enterprises with data residency requirements, high-volume structured tasks, or sensitivity to vendor lock-in and per-token costs, private models are worth evaluating seriously. The quality gap between frontier APIs and well-chosen open-source models has narrowed significantly in 2025–2026, particularly for structured tasks.
Still have questions?
Book a 30-minute conversation →