Follow-up: Optimizing GitHub Actions for faster CI/CD pipelines

6 Posts
5 Users
0 Reactions
93 Views
(@ruth.white53)
Posts: 0
Topic starter

Great post! We've been doing this for about 22 months now and the results have been impressive. Our main learning was that documentation debt is as dangerous as technical debt. We also found that integration with existing tools was smoother than anticipated. For anyone starting out, I'd recommend chaos engineering tests in staging - there's a sketch at the end of this post.

Two more lessons from our run: failure modes should be designed for, not discovered in production, and automation should augment human decision-making, not replace it entirely. Knowing the second one earlier would have saved us a lot of time. We also underestimated the training time needed, but it was worth the investment.

For context, we're using Istio, Linkerd, and Envoy on the service-mesh side, and Vault, AWS KMS, and SOPS for secrets management.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
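
If anyone wants a concrete starting point for the staging chaos tests, here's a minimal sketch as a scheduled GitHub Actions workflow. Treat it as an illustration, not our actual setup: the `staging` namespace, the `web` deployment, and the `STAGING_KUBECONFIG` secret are all placeholders you'd swap for your own.

```yaml
name: staging-chaos
on:
  schedule:
    - cron: '0 6 * * 1-5'   # weekday mornings, outside peak traffic
  workflow_dispatch: {}      # allow on-demand runs while you build confidence
jobs:
  pod-kill:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/setup-kubectl@v4
      - name: Configure access to the staging cluster
        # Assumes the kubeconfig is stored verbatim in a repository secret.
        run: echo "${{ secrets.STAGING_KUBECONFIG }}" > kubeconfig
      - name: Delete one random pod in the staging namespace
        env:
          KUBECONFIG: ${{ github.workspace }}/kubeconfig
        run: |
          POD=$(kubectl -n staging get pods -o name | shuf -n 1)
          kubectl -n staging delete "$POD"
      - name: Verify the service self-heals
        env:
          KUBECONFIG: ${{ github.workspace }}/kubeconfig
        run: kubectl -n staging rollout status deploy/web --timeout=120s
```

The `workflow_dispatch` trigger is there so you can rehearse the experiment by hand a few times before trusting the schedule.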


 
Posted : 05/01/2025 5:21 pm
(@linda.morgan757)
Posts: 0

I hear you, but here's where I disagree on the metrics focus. In our environment, Datadog, PagerDuty, and Slack worked better, largely because starting small and iterating is more effective than a big-bang transformation. That said, context matters a lot - what works for us might not work for everyone. The key is to experiment and measure.

The end result was a 50% reduction in deployment time, and team morale improved significantly once the manual toil was automated away. We also discovered several hidden dependencies during the migration.

Two things I wish I knew earlier: security must be built in from the start, not bolted on later, and documentation debt is as dangerous as technical debt. Both would have saved us a lot of time.

For context, we're using Kubernetes, Helm, ArgoCD, and Prometheus.
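
On the deployment-time point: I won't claim this is exactly what we run, but dependency caching is the kind of small, iterative win the thread is about. A minimal sketch for a Node project using setup-node's built-in cache (the project layout is an assumption):

```yaml
name: test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm   # caches ~/.npm across runs so `npm ci` skips re-downloads
      - run: npm ci
      - run: npm test
```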


 
Posted : 07/01/2025 6:57 am
(@benjamin.taylor696)
Posts: 0

We had a comparable situation on our project. The problem: security vulnerabilities. Our initial approach was ad-hoc monitoring, but it didn't scale. What actually worked: real-time dashboards for stakeholder visibility. The key insight was that the human side of change management is often harder than the technical implementation. Now we're able to deploy with confidence.

One more thing worth mentioning: we discovered several hidden dependencies during the migration.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
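
For what it's worth, the dashboard feed can start as a single workflow step that posts deployment metadata to whatever webhook your dashboard ingests. A rough sketch - the `DASHBOARD_WEBHOOK_URL` secret and the payload shape are placeholders, not any specific product's API:

```yaml
# Hypothetical tail of a deploy job: publish the outcome to a dashboard webhook.
- name: Publish deployment event
  if: always()   # report failures too, so the dashboard reflects reality
  run: |
    curl -fsS -X POST "${{ secrets.DASHBOARD_WEBHOOK_URL }}" \
      -H 'Content-Type: application/json' \
      -d '{"repo": "${{ github.repository }}", "sha": "${{ github.sha }}", "status": "${{ job.status }}"}'
```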


 
Posted : 09/01/2025 4:10 am
(@kathleen.watson88)
Posts: 0

Want to share our path through this. We started about 12 months ago with a small pilot. Initial challenges included performance issues; the breakthrough came when we simplified the architecture. Key metrics improved: 99.9% availability, up from 99.5%. The team's feedback has been overwhelmingly positive, though we still have room for improvement in testing coverage. Lessons learned: start simple. Next steps for us: improve documentation.

One thing I wish I knew earlier: security must be built in from the start, not bolted on later. It would have saved us a lot of time. We also had to iterate several times before finding the right balance.

For context, we're using Jenkins, GitHub Actions, and Docker for CI/CD, with Datadog, PagerDuty, and Slack for monitoring and alerting.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
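
Since Docker builds are usually the slowest stage, here's a sketch of GitHub Actions-backed layer caching with docker/build-push-action. The image name is a placeholder, not our exact pipeline:

```yaml
name: build
on: [push]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Build with shared layer cache
        uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: example/app:${{ github.sha }}   # placeholder image name
          cache-from: type=gha            # reuse layers from earlier runs
          cache-to: type=gha,mode=max     # also cache intermediate layers
```

`mode=max` caches intermediate layers too, which is what makes incremental rebuilds fast.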


 
Posted : 09/01/2025 8:07 am
(@aaron.gutierrez941)
Posts: 0

Makes sense! For us, the approach varied: we built on Elasticsearch, Fluentd, and Kibana, mainly because cross-team collaboration is essential for success. However, I can see how your method would be better for larger teams. Have you considered chaos engineering tests in staging?

The end result was 40% cost savings on infrastructure. One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. That alone would have saved us a lot of time. An unexpected benefit was better developer experience and faster onboarding.
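
If you're curious about the EFK wiring, the log-shipping side is roughly one DaemonSet. A minimal sketch built on the stock fluentd-kubernetes-daemonset image - the `logging` namespace, the Elasticsearch service address, and the image tag are assumptions to check against your own cluster:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging   # placeholder namespace
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # Pin to a current tag from the fluent/fluentd-kubernetes-daemonset repo.
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: elasticsearch.logging.svc   # assumes an in-cluster Elasticsearch service
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

A real deployment also needs a ServiceAccount with read access to pod metadata for the Kubernetes filter; the image's docs cover that.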


 
Posted : 09/01/2025 2:17 pm
(@aaron.gutierrez941)
Posts: 0

From a technical standpoint, here's our implementation. Architecture: serverless with Lambda. Tools used: Terraform, AWS CDK, and CloudFormation. Configuration highlights: GitOps with ArgoCD apps. Performance benchmarks showed a 50% latency reduction. Security considerations: container scanning in CI. We documented everything in our internal wiki - happy to share snippets if helpful.

Integration with existing tools was smoother than anticipated, though I'll echo the point above: security must be built in from the start, not bolted on later.
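
The container-scanning step is the easiest part to show. A minimal sketch using aquasecurity/trivy-action - the image reference is a placeholder and the severity threshold is an assumption, not our exact policy:

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan the built image for known vulnerabilities
        uses: aquasecurity/trivy-action@0.28.0
        with:
          image-ref: example/app:${{ github.sha }}   # placeholder image name
          severity: CRITICAL,HIGH   # only serious findings gate the build
          exit-code: '1'            # non-zero exit fails the job
          ignore-unfixed: true      # skip findings with no available fix
```

Setting `exit-code: '1'` is what turns findings into a failed check rather than a report nobody reads.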


 
Posted : 10/01/2025 1:58 pm