
Practical guide: Implementing blue-green deployments with zero downtime

9 Posts
9 Users
0 Reactions
137 Views
(@brandon.williams519)
Posts: 0
Topic starter
[#156]

While this is well-reasoned, I see the tooling choice differently. In our environment, Grafana, Loki, and Tempo worked better, because observability is not optional: you can't improve what you can't measure. That said, context matters a lot; what works for us might not work for everyone. The key is to experiment and measure.
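To make the "measure before you switch" point concrete, here's a minimal sketch of the kind of gate we run before promoting the green stack. The Prometheus URL, metric labels, and error budget are hypothetical placeholders, not a prescription:

```python
# Query Prometheus for the green stack's 5xx rate and refuse the
# cutover if it exceeds our error budget. All names are placeholders.
import sys
import requests

PROM_URL = "http://prometheus.internal:9090"  # hypothetical endpoint
QUERY = 'sum(rate(http_requests_total{stack="green",code=~"5.."}[5m]))'
ERROR_BUDGET = 0.01  # max tolerated errors/sec before aborting cutover

def green_is_healthy() -> bool:
    resp = requests.get(f"{PROM_URL}/api/v1/query",
                        params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    if not results:
        # No samples means no traffic has hit green yet; treat as unhealthy.
        return False
    error_rate = float(results[0]["value"][1])
    return error_rate < ERROR_BUDGET

if __name__ == "__main__":
    if not green_is_healthy():
        print("green stack is failing its error budget, aborting cutover")
        sys.exit(1)
    print("green stack healthy, safe to switch traffic")
```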

For context, we're using Vault, AWS KMS, and SOPS for secrets management, plus Datadog, PagerDuty, and Slack for alerting and comms.

Two more things worth mentioning: team morale improved significantly once the manual toil was automated away, and while the initial investment was higher than expected, the long-term benefits exceeded our projections.

I'd recommend checking out the relevant blog posts for more details.

The end result was a 3x increase in deployment frequency.


 
Posted : 06/02/2025 11:21 am
(@michelle.ross286)
Posts: 0

I've seen similar patterns. Worth noting the security considerations: we learned those the hard way. On the upside, team morale improved significantly once the manual toil was automated away, and now we always document the process in runbooks. It's added maybe 15 minutes to our workflow but prevents a lot of headaches down the line.

One thing I wish I knew earlier: cross-team collaboration is essential for success. Would have saved us a lot of time.

The end result was a 50% reduction in deployment time.


 
Posted : 08/02/2025 11:40 am
(@rebecca.brown460)
Posts: 0

We faced this too! Symptoms: high latency. Root-cause analysis revealed connection-pool exhaustion. Fix: corrected routing rules. Prevention: load testing. Total time to resolve was 15 minutes, but now we have runbooks and monitoring to catch this early.
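For anyone hitting the same thing, here's a minimal sketch of the shape our routing fix took, using the official Kubernetes Python client. The Service name and the color label are hypothetical stand-ins for our actual setup:

```python
# Blue-green switch: repoint the Service selector from the blue pods
# to the green pods in a single patch. "myapp" and the "color" label
# are hypothetical placeholders.
from kubernetes import client, config

def switch_traffic(target_color: str, namespace: str = "default") -> None:
    config.load_kube_config()  # use load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    patch = {"spec": {"selector": {"app": "myapp", "color": target_color}}}
    v1.patch_namespaced_service(name="myapp", namespace=namespace, body=patch)
    print(f"Service 'myapp' now routes to the {target_color} pods")

if __name__ == "__main__":
    switch_traffic("green")
```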

Two things I wish I knew earlier: security must be built in from the start, not bolted on later, and automation should augment human decision-making, not replace it entirely. Either one would have saved us a lot of time.


 
Posted : 09/02/2025 10:27 pm
(@christopher.bennett288)
Posts: 0

Couldn't relate more! What we learned: Phase 1 (2 weeks) involved assessment and planning. Phase 2 (2 months) focused on the pilot implementation. Phase 3 (1 month) was all about optimization. Total investment was $100K, but the payback period was only 9 months. Key success factors: executive support, a dedicated team, and clear metrics. If I could do it again, I would start with better documentation.

The end result was 99.9% availability, up from 99.5%.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 11/02/2025 3:32 am
(@nicholas.morgan692)
Posts: 0

We went through something very similar. The problem: deployment failures. Our initial approach was simple scripts, but that didn't work because it didn't scale. What actually worked: feature flags for gradual rollouts. The key insight was that failure modes should be designed for, not discovered in production. Now we're able to deploy with confidence.
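If it helps anyone, here's a minimal sketch of the percentage-based flag gate we use for the gradual part. The flag name and rollout percentage are hypothetical examples; the point is the stable hashing, which keeps each user on the same code path across requests:

```python
# Percentage-based feature flag with deterministic bucketing.
import hashlib

ROLLOUT_PERCENT = {"new_deploy_pipeline": 10}  # 10% of users; hypothetical flag

def is_enabled(flag: str, user_id: str) -> bool:
    percent = ROLLOUT_PERCENT.get(flag, 0)
    # Hash flag+user so each user lands in a stable bucket in [0, 100).
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Usage: pick the code path per request.
if is_enabled("new_deploy_pipeline", "user-42"):
    print("new code path")
else:
    print("old code path")
```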

The end result was a 70% reduction in incident MTTR.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 13/02/2025 1:40 am
(@alexander.rodriguez755)
Posts: 0

From the ops trenches, here are the practices we've developed: monitoring with Prometheus and Grafana dashboards; alerting through Opsgenie with escalation policies; documentation in Notion team wikis; and training via monthly lunch-and-learns. These have helped us maintain a low incident count while still moving fast on new features.

Additionally, we found that cross-team collaboration is essential for success, and team morale improved significantly once the manual toil was automated away. We had to iterate several times before finding the right balance, and one thing I wish I knew earlier is that security must be built in from the start, not bolted on later.

For context, we're also using Datadog, PagerDuty, and Slack.

The end result was a 90% decrease in manual toil and a 50% reduction in deployment time.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 14/02/2025 2:27 am
(@jason.brooks11)
Posts: 0

We tackled this from a different angle, using Terraform, AWS CDK, and CloudFormation. The main reason was that automation should augment human decision-making, not replace it entirely. However, I can see how your method would be better for legacy environments. Have you considered cost-allocation tagging for accurate showback?
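On the tagging point, here's a minimal sketch of how we apply cost-allocation tags with the CDK's Python bindings (assuming CDK v2). Tags.of() propagates to every taggable resource in the stack, which is what makes per-team showback workable; the stack contents and tag values here are hypothetical:

```python
# Apply cost-allocation tags to an entire CDK stack at once.
from aws_cdk import App, Stack, Tags
from constructs import Construct

class DeployStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # ...define load balancers, services, buckets, etc. here...

app = App()
stack = DeployStack(app, "blue-green-deploy")
Tags.of(stack).add("team", "platform")          # who owns the spend
Tags.of(stack).add("service", "checkout")       # what to bill it to
Tags.of(stack).add("environment", "production")
app.synth()
```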

One thing I wish I knew earlier: cross-team collaboration is essential for success; it would have saved us a lot of time. I'd also recommend checking out the relevant blog posts for more details.


 
Posted : 15/02/2025 9:12 pm
(@james.bennett725)
Posts: 0

Much appreciated! We're kicking off our evaluation of this approach. Could you elaborate on the migration process? Specifically, I'm curious about your team-training approach. Also, how long did the initial implementation take? Any gotchas we should watch out for?

That 99.9% availability figure, up from 99.5%, is exactly the kind of result we're hoping for.

For context, we're using Kubernetes, Helm, ArgoCD, and Prometheus.

One principle we're taking in from day one: failure modes should be designed for, not discovered in production.


 
Posted : 17/02/2025 8:06 am
(@dennis.king704)
Posts: 0

Technically speaking, a few key factors come into play: first, network topology; second, backup procedures; third, security hardening. We spent significant time on automation, and it was worth it. Code samples are available on our GitHub if anyone wants to take a look; performance testing showed a 2x improvement.
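As a taste of the automation piece, here's a minimal sketch of the pre-cutover smoke test we run against the idle stack; the base URL and endpoint list are hypothetical placeholders:

```python
# Smoke-test the idle (green) stack before any traffic switch.
# A non-zero exit code aborts the deployment pipeline.
import sys
import requests

GREEN_BASE_URL = "https://green.internal.example.com"  # hypothetical
SMOKE_ENDPOINTS = ["/healthz", "/api/v1/status", "/login"]

def smoke_test() -> bool:
    for path in SMOKE_ENDPOINTS:
        try:
            resp = requests.get(GREEN_BASE_URL + path, timeout=5)
        except requests.RequestException as exc:
            print(f"FAIL {path}: {exc}")
            return False
        if resp.status_code != 200:
            print(f"FAIL {path}: HTTP {resp.status_code}")
            return False
        print(f"OK   {path}")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```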

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

For context, we're using Istio, Linkerd, and Envoy.

One thing I wish I knew earlier: failure modes should be designed for, not discovered in production. Would have saved us a lot of time.


 
Posted : 17/02/2025 10:54 pm