Things we built because we could.
Free tools from the Portainer engineering team. Each one solves a real problem we kept running into. None of them are on the product roadmap. All of them are open source.
Platform engineering teams are increasingly running shared Kubernetes clusters, but most application developers still think in Docker. D2K bridges that gap. It presents a synthetic Docker API endpoint that translates incoming Docker CLI commands and API calls into the equivalent Kubernetes operations on a shared cluster... letting developers work exactly as they always have, without touching a kubeconfig.
The result is a clean separation: developers get the Docker workflow they know, platform teams keep full Kubernetes control. No retooling of CI pipelines. No Kubernetes training for every developer on the floor.
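To make the translation concrete, here is a minimal sketch of the kind of mapping D2K performs behind its synthetic endpoint. This is illustrative only, not D2K's actual code: the function name and the simplified inputs are invented, and the real mapping handles far more (networks, volumes, per-team namespaces).

```python
# Illustrative only: a toy translation of a `docker run`-style request
# into the Kubernetes Deployment it might become on the shared cluster.
# Names and structure here are invented, not taken from D2K.

def docker_run_to_deployment(name, image, env=None, replicas=1):
    """Map a simplified `docker run` request onto a Deployment manifest."""
    env = env or {}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "env": [{"name": k, "value": v}
                                for k, v in env.items()],
                    }]
                },
            },
        },
    }

manifest = docker_run_to_deployment("web", "nginx:1.27", env={"TZ": "UTC"})
print(manifest["spec"]["template"]["spec"]["containers"][0]["image"])
```

The point of the sketch: the developer's request stays Docker-shaped, and everything Kubernetes-specific (selectors, labels, pod templates) is synthesized on the platform side.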
Portainer is built for operators. Portainer Run is built for the developers those operators support. It's a self-service developer portal backed by the Portainer API... a Cloud Run-style interface that lets internal developers deploy containers, stream logs, roll back bad images, and diagnose failures without touching Portainer's operator UI or a kubeconfig.
The underlying platform is still Portainer, with all the RBAC and access controls that implies. Portainer Run removes the distance between a developer and their workload. Deploy a container, get a URL, stream logs, inspect revisions, roll back — all from one narrow, purpose-built interface. When something goes wrong, the built-in AI assistant proactively fetches logs, pod conditions, and Kubernetes events across all three levels (Deployment, ReplicaSet, Pod) and triages the failure... including scheduling errors and image pull failures where no application logs exist yet.
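The triage described above can be sketched as a simple classifier over pod state and Kubernetes events. This is a toy illustration of the idea, not the assistant's implementation: the function and its return strings are invented here. Note how two of the branches fire on pod status and events alone, which is exactly the case where no application logs exist yet.

```python
# Illustrative only: the kind of three-level triage the assistant performs.
# It inspects pod status and events before looking for application logs,
# so failures with no logs (scheduling, image pulls) are still diagnosable.

def triage(pod_phase, waiting_reason, events):
    """Classify a workload failure from pod state and Kubernetes events."""
    if any("FailedScheduling" in e for e in events):
        return "scheduling: no node satisfies the pod's resource requests"
    if waiting_reason in ("ImagePullBackOff", "ErrImagePull"):
        return "image: registry unreachable, bad tag, or missing pull secret"
    if waiting_reason == "CrashLoopBackOff":
        return "application: container starts then exits; check its logs"
    if pod_phase == "Running":
        return "healthy: no failure detected at the pod level"
    return "unknown: inspect ReplicaSet and Deployment conditions"

print(triage("Pending", None, ["FailedScheduling: 0/3 nodes available"]))
```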
Three files: portainer-run.html, server.js, and a Dockerfile. A Node.js proxy handles CORS and keeps credentials server-side, with self-signed TLS by default.

portainerctl is a full-featured CLI for Portainer Business Edition that covers the complete REST API. Environments, stacks, containers, Kubernetes workloads, edge compute, users, teams, RBAC, registries, GitOps, webhooks, backups, licensing, observability... all of it, from the command line.
If you're scripting Portainer operations, building CI/CD pipelines that interact with Portainer, or just prefer a terminal to a browser, portainerctl gives you full API coverage with consistent JSON, YAML, or table output and multi-context support for managing multiple Portainer instances.
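As a sketch of what "consistent JSON or table output" means in practice, here is how a CLI like portainerctl might render the same records in either format. This is an assumption-laden illustration: the function name, flag values, and sample data are invented, not portainerctl's actual interface.

```python
# Illustrative only: rendering one result set as JSON or an aligned
# plain-text table, the way a multi-format CLI typically does.
# Function and format names here are invented, not portainerctl's.
import json

def render(records, fmt="table"):
    """Render a list of dicts as JSON or a plain-text table."""
    if fmt == "json":
        return json.dumps(records, indent=2)
    headers = list(records[0])
    rows = [headers] + [[str(r[h]) for h in headers] for r in records]
    widths = [max(len(row[i]) for row in rows) for i in range(len(headers))]
    return "\n".join(
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths))
        for row in rows
    )

envs = [{"id": 1, "name": "edge-factory", "status": "up"},
        {"id": 2, "name": "staging", "status": "down"}]
print(render(envs))          # human-readable table
print(render(envs, "json"))  # machine-readable, pipe to jq or a script
```

The design point: scripts parse the JSON form, humans read the table form, and both come from the same underlying API response.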
Portainer's web UI works in a mobile browser, but that's not the same as a native app. Krill is a native iOS and Android application for Portainer... built for engineers who are on call, on the factory floor, or just away from a desk when something needs attention.
Connect to any Portainer instance, monitor environment health, inspect containers and stacks, and take action when you need to. The same Portainer API that powers the web UI powers Krill. Everything you can see in the browser, you can see on your phone.
The question was straightforward: why does running a single-node Kubernetes cluster still require so much operational ceremony? K3s, k0s, and MicroK8s all work, but none of them were built with the Portainer operator in mind. KubeSolo is a single-node Kubernetes distribution built specifically for edge, IoT, and resource-constrained environments, and packaged to integrate natively with Portainer from day one.
It ships as a single binary. No kubeadm, no etcd cluster, no control-plane setup. Just Kubernetes on a single node, managed through Portainer. What started as a Skunkworks experiment to answer one question became a standalone project with its own community and a clear roadmap.
We're building more.
If you're using one of these tools, or have a problem you think we should take a look at, we want to hear about it. The best Skunkworks projects start with a real operational problem nobody has properly solved.