We're running Kubernetes in production across EKS, AKS, and GKE, and wanted to share a comparison from our experience.
Scale:
- 549 services deployed
- 19 TB data processed/month
- 21M requests/day
- 3 regions worldwide
Architecture:
- Compute: ECS Fargate
- Data: DynamoDB
- Queue: MSK (Kafka)
Monthly cost: ~$78k
Lessons learned:
1. Multi-AZ costs add up fast
2. Data transfer is the hidden cost
3. Tagging strategy is critical (see the sketch below)
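On the tagging point, here's a minimal sketch of the kind of tag-compliance sweep that helped us (assumes boto3 with AWS credentials configured; the required tag keys are illustrative, not our exact schema):

```python
# Sketch: flag resources that are missing required cost-allocation tags.
# Tag keys ("team", "service", "env") are illustrative placeholders.
import boto3

REQUIRED_TAGS = {"team", "service", "env"}

def find_untagged_resources():
    client = boto3.client("resourcegroupstaggingapi")
    untagged = []
    paginator = client.get_paginator("get_resources")
    for page in paginator.paginate(ResourcesPerPage=100):
        for mapping in page["ResourceTagMappingList"]:
            present = {t["Key"] for t in mapping.get("Tags", [])}
            missing = REQUIRED_TAGS - present
            if missing:
                untagged.append((mapping["ResourceARN"], sorted(missing)))
    return untagged

if __name__ == "__main__":
    for arn, missing in find_untagged_resources():
        print(f"{arn} is missing tags: {', '.join(missing)}")
```

Running something like this on a schedule and failing loudly on untagged resources is what made lesson 3 stick for us.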
AMA about our setup!
From the ops trenches, here are the practices we've developed: Monitoring - Datadog APM and logs. Alerting - custom Slack integration. Documentation - Confluence with templates. Training - certification programs. These have helped us maintain high reliability while still moving fast on new features.
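To show what the custom Slack integration looks like in practice, here's a stripped-down sketch (the webhook URL, service name, and message are placeholders, not our actual config):

```python
# Sketch: post an alert to a Slack incoming webhook.
# The webhook URL and alert fields below are placeholders.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def send_alert(service: str, severity: str, message: str) -> None:
    payload = {"text": f":rotating_light: [{severity}] {service}: {message}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

send_alert("checkout-api", "P2", "p95 latency above 800ms for 10 minutes")
```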
One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. Would have saved us a lot of time.
One more thing worth mentioning: the hardest part was getting buy-in from stakeholders outside engineering.
Had this exact problem! Symptoms: frequent timeouts. Root cause analysis revealed memory leaks. Fix: corrected routing rules. Prevention measures: chaos engineering. Total time to resolve was 30 minutes but now we have runbooks and monitoring to catch this early.
For context, we're using Jenkins, GitHub Actions, and Docker.
For context, we're using Istio, Linkerd, and Envoy.
Building on this discussion, I'd highlight security considerations. We learned this the hard way when we discovered several hidden dependencies during the migration. Now we always make sure to include security in design reviews. It's added maybe 30 minutes to our process but prevents a lot of headaches down the line.
Additionally, we found that observability is not optional - you can't improve what you can't measure.
The end result was 40% cost savings on infrastructure.
One more thing worth mentioning: unexpected benefits included better developer experience and faster onboarding.
Here are some technical specifics from our implementation. Architecture: serverless with Lambda. Tools used: Elasticsearch, Fluentd, and Kibana. Configuration highlights: IaC with Terraform modules. Performance benchmarks showed 99.99% availability. Security considerations: container scanning in CI. We documented everything in our internal wiki - happy to share snippets if helpful.
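To make the logging pipeline concrete, our handlers emit structured JSON roughly like this simplified sketch, so Fluentd and Elasticsearch can index fields without regex parsing (the field names are illustrative, not our exact schema):

```python
# Sketch: structured JSON logging from a Lambda handler for an
# Elasticsearch/Fluentd/Kibana pipeline. Field names are illustrative.
import json
import logging
import time

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    start = time.monotonic()
    result = {"status": "ok"}  # real business logic would go here
    logger.info(json.dumps({
        "request_id": getattr(context, "aws_request_id", "local"),
        "route": event.get("path", "unknown"),
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
        "status": result["status"],
    }))
    return {"statusCode": 200, "body": json.dumps(result)}
```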
The end result was 70% reduction in incident MTTR.
For context, we're using Vault, AWS KMS, and SOPS.
We built something comparable in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was understanding that failure modes should be designed for, not discovered in production. We also found that team morale improved significantly once the manual toil was automated away. Happy to share more details if anyone is interested.
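For the showback piece, the reporting side can be as simple as this sketch with Cost Explorer (it assumes the tag has been activated as a cost-allocation tag in Billing; the tag key and date range are illustrative):

```python
# Sketch: monthly showback grouped by a cost-allocation tag.
# The "team" tag key and the dates are illustrative placeholders.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        team = group["Keys"][0]  # e.g. "team$payments"
        cost = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{team}: ${float(cost):,.2f}")
```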
Some implementation details worth sharing. Architecture: serverless with Lambda. Tools used: Jenkins, GitHub Actions, and Docker. Configuration highlights: CI/CD with GitHub Actions workflows. Performance benchmarks showed 99.99% availability. Security considerations: container scanning in CI. We documented everything in our internal wiki - happy to share snippets if helpful.
I'd recommend checking out relevant blog posts for more details.
Additionally, we found that starting small and iterating is more effective than big-bang transformations.
Great post! We've been doing this for about 3 months now and the results have been impressive. Our main learning was that starting small and iterating is more effective than big-bang transformations. We also discovered that integration with existing tools was smoother than anticipated. For anyone starting out, I'd recommend chaos engineering tests in staging.
Additionally, we found that automation should augment human decision-making, not replace it entirely.
Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
Adding some engineering details from our implementation. Architecture: microservices on Kubernetes. Tools used: Elasticsearch, Fluentd, and Kibana. Configuration highlights: CI/CD with GitHub Actions workflows. Performance benchmarks showed 50% latency reduction. Security considerations: container scanning in CI. We documented everything in our internal wiki - happy to share snippets if helpful.
Great points overall! One aspect I'd add is cost analysis. We learned this the hard way when we had to iterate several times before finding the right balance. Now we always make sure to revisit it regularly. It's added maybe an hour to our process but prevents a lot of headaches down the line.
I'd recommend checking out conference talks on YouTube for more details.
Neat! We solved this another way using Jenkins, GitHub Actions, and Docker. The main reason was that failure modes should be designed for, not discovered in production. However, I can see how your method would be better for regulated industries. Have you considered cost allocation tagging for accurate showback?
For context, we're using Grafana, Loki, and Tempo.
Let me dive into the technical side of our implementation. Architecture: serverless with Lambda. Tools used: Istio, Linkerd, and Envoy. Configuration highlights: GitOps with ArgoCD apps. Performance benchmarks showed 99.99% availability. Security considerations: secrets management with Vault. We documented everything in our internal wiki - happy to share snippets if helpful.
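To illustrate the Vault part, the read path in a service looks roughly like this sketch using hvac (the address, static token, and secret path are placeholders; in practice a short-lived auth method makes more sense than a static token):

```python
# Sketch: read a secret from Vault's KV v2 engine with hvac.
# The address, token auth, and secret path are placeholders.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.internal"),
    token=os.environ["VAULT_TOKEN"],  # placeholder: prefer short-lived auth in production
)

secret = client.secrets.kv.v2.read_secret_version(path="payments/db")
db_password = secret["data"]["data"]["password"]
```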
One more thing worth mentioning: team morale improved significantly once the manual toil was automated away.
Here are some operational tips that worked for us: Monitoring - CloudWatch with custom metrics. Alerting - custom Slack integration. Documentation - GitBook for public docs. Training - monthly lunch and learns. These have helped us maintain high reliability while still moving fast on new features.
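For the custom metrics, publishing from application code looks roughly like this minimal sketch (the namespace, metric name, and dimensions are illustrative, not our real ones):

```python
# Sketch: publish a custom CloudWatch metric from application code.
# Namespace, metric name, and dimensions are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_queue_depth(queue_name: str, depth: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="Ops/Queues",
        MetricData=[{
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
            "Value": depth,
            "Unit": "Count",
        }],
    )

record_queue_depth("orders-dlq", 42)
```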
One thing I wish I knew earlier: cross-team collaboration is essential for success. Would have saved us a lot of time.
The end result was 99.9% availability, up from 99.5%.
This mirrors what we went through. Our rollout looked like this: Phase 1 (6 weeks) involved stakeholder alignment. Phase 2 (2 months) focused on process documentation. Phase 3 (1 month) was all about full rollout. Total investment was $100K but the payback period was only 6 months. Key success factors: automation, documentation, feedback loops. If I could do it again, I would involve operations earlier.
The end result was 50% reduction in deployment time.
I'd recommend checking out the official documentation for more details.
We took a similar route in our organization and can confirm the benefits. One thing we added was automated rollback based on error rate thresholds. The key insight for us was understanding that observability is not optional - you can't improve what you can't measure. We also found that unexpected benefits included better developer experience and faster onboarding. Happy to share more details if anyone is interested.
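The rollback logic itself is simple; here's a rough sketch of the idea (the error-rate source, threshold, and kubectl command are placeholders - the real version queries our metrics backend and goes through our deploy tooling):

```python
# Sketch: roll back a deployment when the error rate crosses a threshold.
# The metrics source, threshold, and rollback command are placeholders.
import subprocess

ERROR_RATE_THRESHOLD = 0.05  # 5% of requests failing

def current_error_rate() -> float:
    # Placeholder: in practice this queries the metrics backend
    # (e.g. Datadog or CloudWatch) for the 5xx rate over the last N minutes.
    return 0.08

def maybe_rollback(deployment: str) -> None:
    rate = current_error_rate()
    if rate > ERROR_RATE_THRESHOLD:
        print(f"error rate {rate:.1%} > {ERROR_RATE_THRESHOLD:.0%}, rolling back {deployment}")
        subprocess.run(["kubectl", "rollout", "undo", f"deployment/{deployment}"], check=True)

maybe_rollback("checkout-api")
```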
The end result was 60% improvement in developer productivity.