Public cloud — Containers & Orchestration
intSignal hosts and operates the Kubernetes platform — and runs the same managed services on AWS, Azure, and GCP when your workloads already live there. One operator, one operating model, every layer.
Kubernetes operated on intSignal infrastructure.
Same operating model on AWS, Azure, and GCP.
CIS Benchmark on every cluster we operate.
One on-call team for the whole stack.
PRODUCT/01
End-to-end managed platform on intSignal-hosted infrastructure — runtime, orchestration, registry, ingress, and observability — operated as one service.
terraform · platform.tf
# One module. Whole platform
module "intsignal_platform" {
  source = "./intsignal-platform"

  # example
  name                 = "prod-platform"
  vendor               = "intsignal" # or aws, azure, gcp
  region               = "us-west"
  k8s_version          = "1.30"
  enable_registry      = true
  enable_load_balancer = true
  enable_observability = true
  hardening            = "cis-benchmark"
}
PRODUCT/02
CNCF-certified Kubernetes distributions on intSignal infrastructure or on AWS EKS, Azure AKS, and GCP GKE. Same operating model on every distribution.
kubectl · cluster-info
$ kubectl get nodes -o wide
NAME                STATUS   ROLES    VERSION
node-01.intsignal   Ready    worker   v1.30.4
node-02.intsignal   Ready    worker   v1.30.4
node-03.intsignal   Ready    worker   v1.30.4

$ kubectl get pods -A | head
NAMESPACE     NAME                READY   STATUS
kube-system   coredns-7d4...      1/1     Running
kube-system   cilium-agent-...    1/1     Running
intsignal     monitoring-...      2/2     Running
intsignal     log-forwarder-...   1/1     Running
PRODUCT/03
Multi-cluster control plane across intSignal-hosted and hyperscaler clusters. One RBAC model, one policy engine, one fleet GitOps — regardless of where the cluster runs.
yaml · fleet.yaml
# Apply this manifest to every prod cluster
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: prod-fleet
spec:
  selector:
    matchLabels:
      env: production
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: platform-baseline
spec:
  repo: git@github.com:co/platform-fleet
  targets:
    - clusterGroup: prod-fleet
PRODUCT/04
Push, scan, sign with cosign, gate by policy. Clusters verify signatures on pull. Replication across regions, SBOMs, and tamper-evident audit trails.
shell · push-and-sign.sh
# Push, scan, sign, gate — all in one
$ docker build -t registry.intsignal.io/app:v1.4.2 .
$ docker push registry.intsignal.io/app:v1.4.2
→ scan: 0 critical, 2 medium
→ scan passed
$ cosign sign --key intsignal-kms://... \
registry.intsignal.io/app:v1.4.2
→ signature uploaded
→ image promoted to prod tag
$ cosign verify --key intsignal-kms://... \
registry.intsignal.io/app:v1.4.2
Verification for ...:v1.4.2 -- OK
PRODUCT/05
L4 and L7 ingress with TLS 1.3, WAF, DDoS protection, and rate limiting. Connection draining for zero-downtime rollouts.
yaml · ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    intsignal.io/waf: "on"
    intsignal.io/rate-limit: "1000rps"
    intsignal.io/drain-seconds: "30"
spec:
  ingressClassName: intsignal-lb
  tls:
    - hosts: [api.example.com]
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: { name: api, port: { number: 80 } }
Pick your stack
Three quick questions. We'll suggest a reference architecture and the operating configuration we'd recommend for it.
Multi-cluster · multi-cloud
Managed Rancher gives your platform team a single pane across every Kubernetes cluster — whether they run on intSignal infrastructure, on AWS, Azure, GCP, on-premises, or at the edge. RBAC and policy are defined once and enforced everywhere.
Fleet drives GitOps deployment from your Git repository. Drift detection, version skew alerts, and policy violations show up in one console — not in five separate dashboards.
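Drift handling can be expressed directly on the Fleet resource. The sketch below is against the `fleet.cattle.io/v1alpha1` API; the `correctDrift` field is available in recent Fleet releases, and the repo and group names are illustrative:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: platform-baseline-drift
spec:
  repo: git@github.com:co/platform-fleet
  # Re-apply the desired state from Git when live objects drift
  correctDrift:
    enabled: true
  targets:
    - clusterGroup: prod-fleet
```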
Supply chain · provenance
The managed registry scans every image on push, signs it with cosign, and gates it by policy before it can be pulled. Clusters refuse unsigned or scan-failed images automatically.
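One way such an admission gate can be expressed is a signature-verification policy. The sketch below uses Kyverno as an illustrative policy engine (the platform's actual enforcement mechanism may differ); the registry pattern matches the examples above, and the public key is a placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "registry.intsignal.io/*"
          attestors:
            - entries:
                - keys:
                    # placeholder: public half of the cosign signing key
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

With a policy like this in place, a Pod referencing an unsigned image from the matched registry is rejected at admission time rather than discovered at runtime.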
SBOM generation, replication across regions, tamper-evident audit logs, and retention rules ship by default — designed for SOC 2, ISO 27001, and FedRAMP evidence requirements.
Capacity · growth
When workloads scale beyond their original footprint, the usual answer is a migration — new cluster, new pipelines, new cutover risk. We expand your existing platform horizontally instead: more nodes, more zones, more clusters under the same Rancher fleet — without changing the operating model.
You keep the same registry, the same ingress class, the same RBAC. Your developers don't relearn anything. Your auditors don't reopen evidence.
24/7 platform operations
Every managed cluster — hosted on intSignal or on a hyperscaler — ships with monitoring, incident response, scheduled upgrade drills, and a monthly evidence pack covering change records, CVE remediation, SLA attainment, and drill outcomes.
CONTROL-PLANE OPERATIONS
Cluster control-plane and node health are monitored around the clock by intSignal's on-call rotation. Incidents have a defined response path, a named on-call engineer, and a written post-incident summary.
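A control-plane health check of this kind might look like the following Prometheus Operator rule; it is a sketch, and the alert name, namespace, and `job` label are assumptions about how scrape targets are named:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: control-plane-health
  namespace: intsignal   # illustrative namespace
spec:
  groups:
    - name: control-plane
      rules:
        - alert: APIServerDown
          expr: up{job="apiserver"} == 0
          for: 5m
          labels:
            severity: page
          annotations:
            summary: "kube-apiserver scrape target down for 5 minutes"
```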
UPGRADE CADENCE
Drilled in staging first, scheduled change window in production.
CVE RESPONSE
Severity-graded SLA on patching disclosed vulnerabilities.
CHANGE MANAGEMENT
Rollback path, approval, and outcome captured per change.
EVIDENCE PACK
Operational artifacts your assessors can sign off on.
Hardening and operating practices aligned to the frameworks your assessors recognize. intSignal is not the certified entity for most of these — we deliver the controls and evidence that make your audit possible. Where required, we partner with FedRAMP-authorized providers so the integration is seamless.
Kubernetes hardening with documented exceptions.
Controls and evidence cadence ready for audit.
Cloud-services control narratives.
Encryption, access, audit; BAA via partner.
Authorized hyperscaler regions integrated seamlessly.
Our hosting facility carries its own attestations.
FAQ
If yours isn't here, ask in the consultation — we'd rather flag the awkward bits early than discover them in production.
intSignal hosts and operates the Kubernetes platform on our own infrastructure — that's the default. If your workloads already live on AWS, Azure, or GCP, we run the same managed services against EKS, AKS, or GKE. The operating model is identical; only the underlying cloud differs.
You get a single operator for the whole stack — control plane, registry, ingress, observability — under one contract and one on-call. No shared-responsibility seams between the cloud provider and your platform team. Useful when sovereignty, predictable cost, or "one throat to choke" matters more than hyperscaler-native primitives.
Yes. We take over operations of existing EKS, AKS, or GKE clusters — keeping your existing cloud accounts, networks, and IAM in place. Onboarding includes a discovery pass, hardening to the standard baseline, and handover of the on-call rotation.
Yes — managed Rancher gives you one control plane across any combination of intSignal-hosted, AWS, Azure, GCP, on-premises, and edge clusters. Fleet GitOps drives configuration; RBAC and policy are defined once and applied everywhere.
Minor-version upgrades are drilled against your staging clusters first. Each production upgrade has a written change record, documented rollback path, and signed-off change window. Drill outcomes are included in the monthly evidence pack.
intSignal is not itself a FedRAMP-authorized provider. For federal workloads we run the same managed services against a FedRAMP-authorized partner region — typically AWS GovCloud or Azure Government. You get the same operating model, the same on-call team, and the same evidence cadence, on infrastructure that already carries the authorization.
We expand the platform horizontally rather than migrating you off it. More nodes, more zones, or additional clusters under the same Rancher fleet — keeping the same registry, ingress class, RBAC, and audit evidence trail. No re-platforming, no cutover risk, no audit reset. Capacity planning is part of the quarterly review.
GPU node pools are supported on intSignal-hosted clusters and across hyperscalers. We integrate NVIDIA device plugins, time-slicing or MIG partitioning, and GPU-aware autoscaling for training and inference workloads.
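Scheduling onto such a pool uses the standard device-plugin resource name; in this sketch, the node-pool label and container image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference
spec:
  nodeSelector:
    intsignal.io/node-pool: gpu   # hypothetical pool label
  containers:
    - name: inference
      image: registry.intsignal.io/ml/inference:v2   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1   # standard NVIDIA device-plugin resource
```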
Yes. We integrate with GitHub Actions, GitLab CI, Jenkins, Azure DevOps, ArgoCD, Flux, and anything that speaks the Kubernetes API. Registry, ingress, and cluster API are all standard endpoints your pipelines already know how to use.
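As one example, a GitHub Actions job pushing to the managed registry needs nothing platform-specific; the secret name and login flow below are assumptions about how credentials would be provisioned:

```yaml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the managed registry
        # REGISTRY_TOKEN is a hypothetical repository secret
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login registry.intsignal.io -u ci --password-stdin
      - name: Build and push
        run: |
          docker build -t registry.intsignal.io/app:${{ github.sha }} .
          docker push registry.intsignal.io/app:${{ github.sha }}
```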
Share your workload profile and compliance constraints. We'll propose a reference architecture — on intSignal infrastructure or on your existing cloud — and walk you through the day-two operating model.