Practical guide: Jenkins vs GitHub Actions vs GitLab CI: 2024 comparison

11 Posts
9 Users
0 Reactions
295 Views
(@katherine.nelson24)
Posts: 0
Topic starter
[#312]

We created a similar solution in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was that the human side of change management is often harder than the technical implementation. We also discovered several hidden dependencies during the migration. Happy to share more details if anyone is interested.

The end result was a 70% reduction in incident MTTR. Additionally, we found that automation should augment human decision-making, not replace it entirely.

For more details, I'd recommend checking out the official documentation, relevant blog posts, and the community forums.
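
On the cost allocation tagging: here's a minimal sketch of what that can look like, assuming AWS and boto3. The tag keys and the example ARN are illustrative placeholders, not our actual setup.

```python
"""Minimal sketch of cost allocation tagging with boto3.

Assumes AWS; the tag keys (team, cost-center) and the ARN list
are placeholders, not a real layout.
"""
import boto3

def tag_for_showback(resource_arns, team, cost_center):
    # The Resource Groups Tagging API can tag many resource types in one call.
    client = boto3.client("resourcegroupstaggingapi")
    response = client.tag_resources(
        ResourceARNList=resource_arns,
        Tags={"team": team, "cost-center": cost_center},
    )
    # FailedResourcesMap lists ARNs the API could not tag, with error details.
    return response["FailedResourcesMap"]

if __name__ == "__main__":
    failed = tag_for_showback(
        ["arn:aws:s3:::example-bucket"],  # hypothetical ARN
        team="platform",
        cost_center="cc-1234",
    )
    print(f"Resources that failed tagging: {failed}")
```

Once the tags are in place, the billing console (or Cost Explorer) can group spend by them, which is what makes the showback report possible.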


 
Posted : 24/12/2024 1:21 am
(@jeffrey.price491)
Posts: 0

Thoughtful post - though I'd challenge one aspect on the tooling choice. In our environment, we found that Elasticsearch, Fluentd, and Kibana worked better. We also learned that documentation debt is as dangerous as technical debt. That said, context matters a lot - what works for us might not work for everyone. The key is to invest in training.

Two things I wish I knew earlier: the human side of change management is often harder than the technical implementation, and starting small and iterating is more effective than big-bang transformations. Knowing both would have saved us a lot of time.
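
Since the EFK stack came up, here's a minimal sketch of emitting JSON-structured logs that Fluentd can ingest directly. The field names (service, event) are illustrative, not a required schema.

```python
"""Minimal sketch of JSON-structured logging for an EFK pipeline.

Fluentd's JSON parser can ingest these lines as-is; the field
names are illustrative, not a required schema.
"""
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Emit one JSON object per line so Fluentd can parse each event.
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "checkout",  # illustrative service name
            "event": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # -> {"timestamp": ..., "level": "INFO", ...}
```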


 
Posted : 25/12/2024 8:54 pm
(@tyler.foster787)
Posts: 0

Great approach! We built something similar in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was that observability is not optional - you can't improve what you can't measure. We also had to iterate several times before finding the right balance. Happy to share more details if anyone is interested.

Additionally, we found that starting small and iterating is more effective than big-bang transformations.


 
Posted : 27/12/2024 9:26 pm
(@mark.murphy761)
Posts: 0

Makes sense! For us, the approach varied: we evaluated Istio, Linkerd, and Envoy. The main driver was that cross-team collaboration is essential for success. However, I can see how your method would be better for regulated industries. Have you considered automated rollback based on error rate thresholds?

Additionally, we found that security must be built in from the start, not bolted on later, and that observability is not optional - you can't improve what you can't measure.
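
To make the rollback question concrete, here's a minimal sketch assuming Prometheus metrics and a Kubernetes deployment. The query, the deployment name, and the 5% threshold are illustrative choices, not anyone's actual setup.

```python
"""Minimal sketch of automated rollback on an error-rate threshold.

Assumes Prometheus and a Kubernetes deployment; endpoint, query,
and threshold are illustrative.
"""
import subprocess
import requests

PROMETHEUS_URL = "http://prometheus:9090"  # hypothetical endpoint
ERROR_RATE_QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m]))'
    " / sum(rate(http_requests_total[5m]))"
)
THRESHOLD = 0.05  # roll back if more than 5% of requests fail

def current_error_rate():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": ERROR_RATE_QUERY}
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def maybe_rollback(deployment="web", namespace="prod"):
    rate = current_error_rate()
    if rate > THRESHOLD:
        # kubectl rollout undo reverts to the previous ReplicaSet revision.
        subprocess.run(
            ["kubectl", "rollout", "undo",
             f"deployment/{deployment}", "-n", namespace],
            check=True,
        )
        print(f"Error rate {rate:.2%} exceeded threshold; rolled back.")
    else:
        print(f"Error rate {rate:.2%} within threshold; no action.")

if __name__ == "__main__":
    maybe_rollback()
```

Run as a CronJob or a post-deploy pipeline step; the same check also works as a gate during progressive rollouts.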


 
Posted : 29/12/2024 3:00 pm
(@john.long261)
Posts: 0

Our experience was remarkably similar. The problem: deployment failures. Our initial approach was manual intervention, but that didn't work because it didn't scale. What actually worked: drift detection with automated remediation. The key insight was that starting small and iterating is more effective than big-bang transformations. Now we're able to deploy with confidence.

One more thing worth mentioning: the hardest part was getting buy-in from stakeholders outside engineering.

For context, we're using Terraform, AWS CDK, and CloudFormation.
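
For anyone curious what drift detection with remediation can look like on the Terraform side, here's a minimal sketch. Auto-apply on drift is a policy choice the post doesn't spell out, so it's gated behind a flag here.

```python
"""Minimal sketch of Terraform drift detection with optional remediation.

Assumes the Terraform CLI is on PATH and the working directory
holds initialized configuration.
"""
import subprocess
import sys

def detect_drift(workdir=".", auto_apply=False):
    # -detailed-exitcode: 0 = no changes, 1 = error, 2 = drift detected.
    plan = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    if plan.returncode == 0:
        print("No drift detected.")
    elif plan.returncode == 2:
        print("Drift detected.")
        if auto_apply:
            # Re-apply the configuration to converge back to declared state.
            subprocess.run(
                ["terraform", "apply", "-auto-approve", "-input=false"],
                cwd=workdir, check=True,
            )
    else:
        sys.exit("terraform plan failed")

if __name__ == "__main__":
    detect_drift(auto_apply=False)
```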


 
Posted : 30/12/2024 11:37 am
(@donna.jimenez105)
Posts: 0

This resonates strongly. We've learned that failure modes should be designed for, not discovered in production. We initially struggled with security concerns but found that compliance scanning in the CI pipeline worked well. The ROI has been significant - we've seen a 2x improvement.

Additionally, we found that cross-team collaboration is essential for success. For context, we're using Kubernetes, Helm, ArgoCD, and Prometheus.

I'd recommend checking out conference talks on YouTube for more details.
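
On the compliance scanning: here's a minimal sketch of a CI gate, assuming Trivy is installed in the CI image. The image name and severity cutoff are illustrative; any scanner that exits non-zero on findings can be slotted in the same way.

```python
"""Minimal sketch of a compliance gate for a CI pipeline.

Assumes the Trivy CLI is available; image name and severity
cutoff are illustrative.
"""
import subprocess
import sys

def compliance_gate(image: str) -> None:
    # --exit-code 1 makes Trivy fail the step when findings match,
    # which is what turns a scan into a pipeline gate.
    result = subprocess.run(
        ["trivy", "image",
         "--exit-code", "1",
         "--severity", "HIGH,CRITICAL",
         image],
    )
    if result.returncode != 0:
        sys.exit(f"Compliance gate failed for {image}")
    print(f"{image} passed the compliance gate.")

if __name__ == "__main__":
    compliance_gate("registry.example.com/app:latest")  # hypothetical image
```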


 
Posted : 31/12/2024 11:39 pm
(@maria.carter392)
Posts: 0

Here's the technical breakdown of our implementation. Architecture: hybrid cloud setup. Tools: Vault, AWS KMS, and SOPS for secrets management; Istio, Linkerd, and Envoy on the service mesh side; Jenkins, GitHub Actions, and Docker for CI; Datadog, PagerDuty, and Slack for monitoring and alerting. Configuration highlights: GitOps with ArgoCD apps. Performance benchmarks showed a 50% latency reduction, and deployment time dropped by 50% as well. Security considerations: zero-trust networking. We documented everything in our internal wiki - happy to share snippets if helpful.

One more thing worth mentioning: the hardest part was getting buy-in from stakeholders outside engineering. And one thing I wish I knew earlier: security must be built in from the start, not bolted on later. That would have saved us a lot of time.

I'd recommend checking out the official documentation for more details.
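
Since Vault is part of the stack above, here's a minimal sketch of reading a secret with the hvac client. The address, token source, and path are placeholders, not our actual layout; it assumes the KV v2 engine at the default "secret/" mount.

```python
"""Minimal sketch of reading a secret from Vault with hvac.

Assumes KV v2 at the default mount; address, token source,
and path are placeholders.
"""
import os
import hvac

def read_db_password() -> str:
    client = hvac.Client(
        url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
        token=os.environ["VAULT_TOKEN"],  # injected by CI, never hard-coded
    )
    # KV v2 wraps the payload in data.data.
    secret = client.secrets.kv.v2.read_secret_version(path="app/db")
    return secret["data"]["data"]["password"]

if __name__ == "__main__":
    print("Fetched a password of length", len(read_db_password()))
```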


 
Posted : 01/01/2025 6:07 am
(@scott.allen968)
Posts: 0

Nice! We did something similar in our organization and can confirm the benefits. One thing we added was chaos engineering tests in staging. The key insight for us was understanding that the human side of change management is often harder than the technical implementation. We also found that the hardest part was getting buy-in from stakeholders outside engineering. Happy to share more details if anyone is interested.

Additionally, we found that documentation debt is as dangerous as technical debt.
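
On the chaos engineering tests in staging: here's a minimal sketch using the official Kubernetes Python client, deleting one random pod to verify the ReplicaSet recovers. The namespace and label selector are placeholders; run this only against staging.

```python
"""Minimal sketch of a staging-only chaos test: delete one random pod.

Assumes the kubernetes Python client and a kubeconfig context
pointed at staging; namespace and selector are placeholders.
"""
import random
from kubernetes import client, config

def kill_random_pod(namespace="staging", label_selector="app=web"):
    config.load_kube_config()  # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
    if not pods:
        print("No matching pods; nothing to do.")
        return
    victim = random.choice(pods)
    # Deleting the pod lets the ReplicaSet prove it can recover.
    v1.delete_namespaced_pod(victim.metadata.name, namespace)
    print(f"Deleted {victim.metadata.name}; watch recovery metrics.")

if __name__ == "__main__":
    kill_random_pod()
```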


 
Posted : 02/01/2025 3:47 pm
(@katherine.nelson24)
Posts: 0
Topic starter

Here's what worked well for us:
1) Automate everything possible
2) Implement circuit breakers (see the sketch below)
3) Share knowledge across teams
4) Measure what matters

Common mistakes to avoid: not measuring outcomes. Resources that helped us: the Google SRE book. The most important thing is collaboration over tools.

The end result was a 70% reduction in incident MTTR. One more thing worth mentioning: we underestimated the training time needed, but it was worth the investment. Additionally, we found that observability is not optional - you can't improve what you can't measure.
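
For point 2, here's a minimal circuit-breaker sketch. The failure threshold and reset timeout are illustrative, and production implementations usually add a half-open state and per-endpoint statistics.

```python
"""Minimal circuit-breaker sketch (point 2 above).

Thresholds and the recovery timeout are illustrative choices.
"""
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # While open, fail fast instead of hammering a struggling dependency.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at = None  # window elapsed: allow a retry
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the count
        return result
```

Wrap outbound calls like `breaker.call(requests.get, url)`; after five consecutive failures the breaker fails fast for thirty seconds instead of piling load onto a struggling dependency.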


 
Posted : 04/01/2025 4:02 am
(@john.long261)
Posts: 0

We ran a similar implementation in our organization and can confirm the benefits. One thing we added was chaos engineering tests in staging. The key insight for us was that observability is not optional - you can't improve what you can't measure. We also found that unexpected benefits included better developer experience and faster onboarding. Happy to share more details if anyone is interested.

Additionally, we found that failure modes should be designed for, not discovered in production.


 
Posted : 04/01/2025 11:59 pm
(@stephanie.howard98)
Posts: 0

We created a similar solution in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was that the human side of change management is often harder than the technical implementation. Unexpected benefits included a better developer experience and faster onboarding. Happy to share more details if anyone is interested.

For context, we're using Jenkins, GitHub Actions, and Docker.


 
Posted : 05/01/2025 7:25 am