
Public cloud — Containers & Orchestration

Hosted Kubernetes, without the operations burden.

intSignal hosts and operates the Kubernetes platform — and runs the same managed services on AWS, Azure, and GCP when your workloads already live there. One operator, one operating model, every layer.

Schedule consultation ⟶ · View Storage Options

Hosted platform

Kubernetes operated on intSignal infrastructure.

Multi-cloud capable

Same operating model on AWS, Azure, and GCP.

Hardened by default

CIS Benchmark on every cluster we operate.

24/7 operations

One on-call team for the whole stack.

PRODUCT/01

Container Platform

End-to-end managed platform on intSignal-hosted infrastructure — runtime, orchestration, registry, ingress, and observability — operated as one service.

  • Hosted by: intSignal
  • Scope: full stack
  • Hardening: CIS Benchmark
terraform · platform.tf

# One module, whole platform
module "intsignal_platform" {
  source = "./intsignal-platform"
  # example

  name = "prod-platform"
  vendor = "intsignal"
  # or aws, azure, gcp
  region = "us-west"
  k8s_version = "1.30"

  enable_registry = true
  enable_load_balancer = true
  enable_observability = true

  hardening = "cis-benchmark"
}

PRODUCT/02

Managed Kubernetes Service

CNCF-certified Kubernetes distributions on intSignal infrastructure or on AWS EKS, Azure AKS, and GCP GKE. Same operating model on every distribution.

  • Hosted by: intSignal or hyperscaler
  • Scope: per cluster
  • Hardening: CIS Benchmark
kubectl · cluster-info

$ kubectl get nodes
NAME                  STATUS   ROLES    VERSION
node-01.intsignal     Ready    worker   v1.30.4
node-02.intsignal     Ready    worker   v1.30.4
node-03.intsignal     Ready    worker   v1.30.4

$ kubectl get pods -A | head
NAMESPACE     NAME                  READY   STATUS
kube-system   coredns-7d4...        1/1     Running
kube-system   cilium-agent-...      1/1     Running
intsignal     monitoring-...        2/2     Running
intsignal     log-forwarder-...     1/1     Running

PRODUCT/03

Managed Rancher Service

Multi-cluster control plane across intSignal-hosted and hyperscaler clusters. One RBAC model, one policy engine, one Fleet-driven GitOps pipeline — regardless of where the cluster runs.

  • Hosted by: intSignal
  • Scope: cluster fleet
  • Hardening: Fleet · ArgoCD
yaml · fleet.yaml

# Apply this manifest to every prod cluster
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: prod-fleet
spec:
  selector:
    matchLabels:
      env: production
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: platform-baseline
spec:
  repo: git@github.com:co/platform-fleet
  targets:
    - clusterGroup: prod-fleet

PRODUCT/04

Managed Private Registry

Push, scan, sign with cosign, gate by policy. Clusters verify signatures on pull. Replication across regions, SBOMs, and tamper-evident audit trails.

  • Hosted by: intSignal
  • Scope: cross-cluster
  • Hardening: cosign · sigstore
shell · push-and-sign.sh

# Push, scan, sign, gate — all in one
$ docker build -t registry.intsignal.io/app:v1.4.2 .
$ docker push registry.intsignal.io/app:v1.4.2

→ scan: 0 critical, 2 medium
→ scan passed

$ cosign sign --key intsignal-kms://... \
    registry.intsignal.io/app:v1.4.2

→ signature uploaded
→ image promoted to prod tag

$ cosign verify --key intsignal-kms://... \
    registry.intsignal.io/app:v1.4.2
Verification for ...:v1.4.2 -- OK

PRODUCT/05

Load Balancer for Kubernetes

L4 and L7 ingress with TLS 1.3, WAF, DDoS protection, and rate limiting. Connection draining for zero-downtime rollouts.

  • Hosted by: intSignal or hyperscaler
  • Scope: per cluster
  • Hardening: L4 + L7 + WAF
yaml · ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    intsignal.io/waf: "on"
    intsignal.io/rate-limit: "1000rps"
    intsignal.io/drain-seconds: "30"
spec:
  ingressClassName: intsignal-lb
  tls:
    - hosts: [api.example.com]
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            backend:
              service: { name: api, port: { number: 80 } }

Pick your stack

What does your cluster need to do?

Three quick questions. We'll suggest a reference architecture and the configuration we'd recommend operating it under.

Tell us about the workload

01 · Where should it run?
02 · What kind of workload?
03 · Compliance constraint?
Recommended stack
intSignal-hosted · web/API
Kubernetes on intSignal infrastructure, operated end-to-end.

Multi-cluster · multi-cloud

Govern many clusters as one.

Managed Rancher gives your platform team a single pane across every Kubernetes cluster — whether they run on intSignal infrastructure, on AWS, Azure, GCP, on-premises, or at the edge. RBAC and policy are defined once and enforced everywhere.

Fleet drives GitOps deployment from your Git repository. Drift detection, version skew alerts, and policy violations show up in one console — not in five separate dashboards.

  • One operator console across every cluster
  • Fleet-driven GitOps with audit trail
  • Cross-cluster RBAC, OIDC, and policy enforcement
  • Drift detection and version-skew alerting
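The fleet.yaml manifest above selects clusters by label, so bringing a new cluster under the same governance is just a labeled registration. A minimal sketch, assuming Fleet's Cluster resource and a pre-created kubeconfig secret; the cluster name, namespace, and secret are illustrative, not a real intSignal configuration:

```yaml
# Illustrative sketch: register a downstream cluster with Fleet and
# label it so the prod-fleet ClusterGroup's selector (env: production)
# picks it up. All names here are hypothetical.
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: eks-us-east-1            # e.g. an EKS cluster joining the fleet
  namespace: fleet-default
  labels:
    env: production              # matched by the prod-fleet ClusterGroup
spec:
  kubeConfigSecret: eks-us-east-1-kubeconfig
```

Once labeled `env: production`, the cluster inherits everything targeted at prod-fleet — the platform baseline, RBAC, and policy — with no per-cluster wiring.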

Supply chain · provenance

Every image signed. Every pull verified.

The managed registry scans every image on push, signs it with cosign, and gates it by policy before it can be pulled. Clusters refuse unsigned or scan-failed images automatically.

SBOM generation, replication across regions, tamper-evident audit logs, and retention rules ship by default — designed for SOC 2, ISO 27001, and FedRAMP evidence requirements.

  • Vulnerability scanning with policy gates
  • Cosign signing with SBOM attestations
  • Cross-region replication with consistency
  • Cluster-side signature verification
  • Tamper-evident retention for compliance
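Cluster-side verification can be expressed as admission policy. A minimal sketch using sigstore's policy-controller (an assumption about the enforcement mechanism, not a statement of intSignal's implementation), reusing the KMS key reference style from the cosign example above:

```yaml
# Hedged sketch: a ClusterImagePolicy that admits images from the
# managed registry only when a cosign signature from the named key
# verifies. The key URI and glob are illustrative placeholders.
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-intsignal-signature
spec:
  images:
    - glob: "registry.intsignal.io/**"
  authorities:
    - key:
        kms: intsignal-kms://signing-key   # placeholder KMS reference
```

Unsigned or scan-failed images then fail admission at pull time, rather than relying on pipeline discipline alone.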

Capacity · growth

Outgrow your cluster without re-platforming.

When workloads scale beyond their original footprint, the usual answer is a migration — new cluster, new pipelines, new cutover risk. We expand your existing platform horizontally instead: more nodes, more zones, more clusters under the same Rancher fleet — without changing the operating model.

You keep the same registry, the same ingress class, the same RBAC. Your developers don't relearn anything. Your auditors don't reopen evidence.

  • Add nodes, zones, or clusters under existing control plane
  • No re-IP, no new pipelines, no cutover risk
  • Capacity planning included in quarterly review
  • Grow into a new vendor without migrating away from the old one
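As a sketch of what "more nodes without re-platforming" looks like when worker pools are managed declaratively — assuming the platform exposes Cluster API, which is an illustration, not a statement about intSignal's internals:

```yaml
# Illustrative only: adding capacity as a declarative replica change on
# a Cluster API MachineDeployment. No new cluster, no migration; all
# names are hypothetical.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: prod-platform-workers
  namespace: prod-platform
spec:
  clusterName: prod-platform
  replicas: 9        # scaled up from 6; same registry, ingress, RBAC
  # selector and machine template unchanged from the existing pool
```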

24/7 platform operations

You ship code. We deliver audit evidence.

Every managed cluster — hosted on intSignal or on a hyperscaler — ships with monitoring, incident response, scheduled upgrade drills, and a monthly evidence pack covering change records, CVE remediation, SLA attainment, and drill outcomes.

CONTROL-PLANE OPERATIONS

Continuous monitoring with documented response.

Cluster control-plane and node health are monitored around the clock by intSignal's on-call rotation. Incidents have a defined response path, a named on-call engineer, and a written post-incident summary.

UPGRADE CADENCE

Quarterly minor versions

Drilled in staging first, scheduled change window in production.

CVE RESPONSE

Documented remediation

Severity-graded SLA on patching disclosed vulnerabilities.

CHANGE MANAGEMENT

Written records

Rollback path, approval, and outcome captured per change.

EVIDENCE PACK

Monthly delivery

Operational artifacts your assessors can sign off on.

What you get every month

  • Change-record log with rollbacks and approvals
  • CVE remediation timeline and patch evidence
  • Upgrade-drill outcomes and version states
  • Access reviews and IAM diff
  • Incident summaries with root cause
  • SLA attainment against contracted terms

Built for regulated workloads

Hardening and operating practices aligned to the frameworks your assessors recognize. intSignal is not the certified entity for most of these — we deliver the controls and evidence that make your audit possible. Where required, we partner with FedRAMP-authorized providers so the integration is seamless.

HARDENING

CIS Benchmark

Kubernetes hardening with documented exceptions.

SOC 2

Aligned to Type II

Controls and evidence cadence ready for audit.

ISO

Aligned to 27001 / 27017

Cloud-services control narratives.

HIPAA

HIPAA-compliant ops

Encryption, access, audit; BAA via partner.

FEDERAL

FedRAMP via partner

Authorized hyperscaler regions integrated seamlessly.

DATACENTER

Compliant facility

Our hosting facility carries its own attestations.

FAQ

Questions platform teams ask before signing.

If yours isn't here, ask in the consultation — we'd rather flag the awkward bits early than discover them in production.

Who hosts the platform — intSignal or a hyperscaler?

intSignal hosts and operates the Kubernetes platform on our own infrastructure — that's the default. If your workloads already live on AWS, Azure, or GCP, we run the same managed services against EKS, AKS, or GKE. The operating model is identical; only the underlying cloud differs.

Why choose intSignal-hosted over going straight to a hyperscaler?

You get a single operator for the whole stack — control plane, registry, ingress, observability — under one contract and one on-call. No shared-responsibility seams between the cloud provider and your platform team. Useful when sovereignty, predictable cost, or "one throat to choke" matters more than hyperscaler-native primitives.

Can you take over clusters we already run?

Yes. We take over operations of existing EKS, AKS, or GKE clusters — keeping your existing cloud accounts, networks, and IAM in place. Onboarding includes a discovery pass, hardening to the standard baseline, and handover of the on-call rotation.

Can you manage clusters across multiple clouds at once?

Yes — managed Rancher gives you one control plane across any combination of intSignal-hosted, AWS, Azure, GCP, on-premises, and edge clusters. Fleet GitOps drives configuration; RBAC and policy are defined once and applied everywhere.

How are Kubernetes upgrades handled?

Minor-version upgrades are drilled against your staging clusters first. Each production upgrade has a written change record, documented rollback path, and signed-off change window. Drill outcomes are included in the monthly evidence pack.

Is intSignal FedRAMP authorized?

intSignal is not itself a FedRAMP-authorized provider. For federal workloads we run the same managed services against a FedRAMP-authorized partner region — typically AWS GovCloud or Azure Government. You get the same operating model, the same on-call team, and the same evidence cadence, on infrastructure that already carries the authorization.

What happens when we outgrow the initial platform?

We expand the platform horizontally rather than migrating you off it. More nodes, more zones, or additional clusters under the same Rancher fleet — keeping the same registry, ingress class, RBAC, and audit evidence trail. No re-platforming, no cutover risk, no audit reset. Capacity planning is part of the quarterly review.

Do you support GPU workloads?

GPU node pools are supported on intSignal-hosted clusters and across hyperscalers. We integrate NVIDIA device plugins, time-slicing or MIG partitioning, and GPU-aware autoscaling for training and inference workloads.
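A minimal sketch of what a workload requests once the device plugin is in place; the image and node-pool label are hypothetical:

```yaml
# Illustrative pod spec targeting a GPU node pool. Assumes the NVIDIA
# device plugin is installed; names and labels are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  nodeSelector:
    intsignal.io/node-pool: gpu     # hypothetical pool label
  containers:
    - name: inference
      image: registry.intsignal.io/inference:v1
      resources:
        limits:
          nvidia.com/gpu: 1         # schedules onto a GPU node
```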

Will this work with our existing CI/CD tooling?

Yes. We integrate with GitHub Actions, GitLab CI, Jenkins, Azure DevOps, ArgoCD, Flux, and anything that speaks the Kubernetes API. Registry, ingress, and cluster API are all standard endpoints your pipelines already know how to use.
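For example, a GitHub Actions job pushing to the managed registry needs nothing vendor-specific; the secret names and action versions below are illustrative assumptions:

```yaml
# Hedged sketch: build and push an image to the managed registry from
# GitHub Actions using standard Docker actions. Secrets are placeholders.
name: build-and-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: registry.intsignal.io
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: registry.intsignal.io/app:${{ github.sha }}
```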

Stop running Kubernetes. Start shipping on it.

Share your workload profile and compliance constraints. We'll propose a reference architecture — on intSignal infrastructure or on your existing cloud — and walk you through the day-two operating model.

Schedule Consultation   ⟶