Follow-up: Jenkins vs GitHub Actions vs GitLab CI: 2024 comparison

24 Posts
23 Users
0 Reactions
415 Views
(@david_jenkins)

Thanks for this! We're beginning our evaluation of this approach. Could you elaborate on the migration process? Specifically, I'm curious about your approach to team training. Also, how long did the initial implementation take? Any gotchas we should watch out for?

The end result was 40% cost savings on infrastructure, 99.9% availability (up from 99.5%), a 60% improvement in developer productivity, and a 50% reduction in deployment time.

One more thing worth mentioning: the hardest part was getting buy-in from stakeholders outside engineering, and cross-team collaboration proved essential. We underestimated the training time needed, but it was worth the investment; likewise, the initial investment was higher than expected, but the long-term benefits exceeded our projections.

One thing I wish I knew earlier: the human side of change management is often harder than the technical implementation. Would have saved us a lot of time.


 
Posted : 06/04/2025 10:48 am
(@stephanie.howard98)

Great post! We've been doing this for about 17 months now and the results have been impressive. Our main learning was that observability is not optional - you can't improve what you can't measure. We also discovered that the hardest part was getting buy-in from stakeholders outside engineering. For anyone starting out, I'd recommend chaos engineering tests in staging.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

For context, we're using Elasticsearch, Fluentd, and Kibana.
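
Stephanie's chaos-engineering suggestion is easy to start small with. As a minimal sketch (not her actual setup; the "staging" namespace and label selector are hypothetical), here is a pod-kill experiment using the official Python kubernetes client:

```python
# Minimal pod-kill chaos experiment for a staging cluster.
# Assumes: `pip install kubernetes`, a valid kubeconfig, and a
# hypothetical "staging" namespace with pods labeled app=checkout.
import random

from kubernetes import client, config

def kill_random_pod(namespace: str = "staging", label_selector: str = "app=checkout") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
    if not pods:
        print("No matching pods; nothing to do.")
        return
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name} to verify self-healing and alerting")
    v1.delete_namespaced_pod(victim.metadata.name, namespace)

if __name__ == "__main__":
    kill_random_pod()
```

Run it against staging only, watch whether your dashboards and alerts notice, and you have a first chaos test.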


 
Posted : 07/04/2025 5:47 pm
(@nicholas.gray779)

On the operational side, here are the practices we've settled on: monitoring with CloudWatch custom metrics, alerting via a custom Slack integration, documentation in Confluence with templates, and training through pairing sessions. These have helped us keep deployments reliable while still moving fast on new features.

I'd recommend checking out the community forums for more details.

The end result was 99.9% availability, up from 99.5%.

Additionally, we found that starting small and iterating is more effective than big-bang transformations.
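
To make the custom-metrics piece concrete, here is a minimal sketch of publishing a deployment-duration metric with boto3. This is an illustration, not Nicholas's actual code; the namespace, metric, and dimension names are made up:

```python
# Publish a custom deployment-duration metric to CloudWatch.
# Assumes: `pip install boto3` and AWS credentials in the environment.
# Namespace, metric, and dimension names here are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_deploy_duration(seconds: float, service: str) -> None:
    cloudwatch.put_metric_data(
        Namespace="Team/Deployments",
        MetricData=[
            {
                "MetricName": "DeployDurationSeconds",
                "Dimensions": [{"Name": "Service", "Value": service}],
                "Value": seconds,
                "Unit": "Seconds",
            }
        ],
    )

record_deploy_duration(312.5, "checkout-api")
```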


 
Posted : 08/04/2025 10:27 am
(@benjamin.rivera487)

We chose a different path here, using Elasticsearch, Fluentd, and Kibana. The main reason was that cross-team collaboration is essential for success. However, I can see how your method would be better for larger teams. Have you considered drift detection with automated remediation?

Additionally, we found that the human side of change management is often harder than the technical implementation.

The end result was 60% improvement in developer productivity.

Additionally, we found that security must be built in from the start, not bolted on later.
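
On the drift-detection question, one common pattern is to run `terraform plan -detailed-exitcode` on a schedule and treat exit code 2 as drift. A minimal sketch, assuming the infrastructure is managed with Terraform (the thread doesn't say what stack Benjamin's team uses):

```python
# Scheduled drift check using Terraform's -detailed-exitcode flag:
# exit code 0 = no changes, 1 = error, 2 = drift (changes pending).
# Assumes the terraform CLI is installed and this runs in the repo root.
import subprocess
import sys

def check_drift() -> None:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        print("Drift detected; plan output follows:")
        print(result.stdout)
        # Hook automated remediation or paging here, e.g. a
        # `terraform apply` behind an approval step.
    elif result.returncode == 1:
        print("terraform plan failed:", result.stderr, file=sys.stderr)
        sys.exit(1)
    else:
        print("No drift.")

if __name__ == "__main__":
    check_drift()
```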


 
Posted : 10/04/2025 7:16 am
(@james.allen159)

Diving into the technical details, there are three things to consider: first, compliance requirements; second, backup procedures; third, security hardening. We spent significant time on testing and it was worth it. Code samples are available on our GitHub if anyone wants to take a look. Performance testing showed a 2x improvement.

For context, we're using Grafana, Loki, and Tempo.

For context, we're using Vault, AWS KMS, and SOPS.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
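
Since Vault came up, here is a minimal sketch of reading a secret from Vault's KV v2 engine with the hvac client. The secret path is hypothetical and this is not James's actual setup:

```python
# Read a database password from Vault's KV v2 secrets engine.
# Assumes: `pip install hvac`, VAULT_ADDR/VAULT_TOKEN in the environment,
# and a hypothetical secret stored at secret/data/ci/db.
import os

import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

response = client.secrets.kv.v2.read_secret_version(path="ci/db")
db_password = response["data"]["data"]["password"]
print("Fetched secret for key 'password' (value intentionally not printed).")
```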


 
Posted : 11/04/2025 11:20 am
(@michelle.gutierrez269)

Just dealt with this! Symptoms: frequent timeouts. Root cause analysis revealed memory leaks. Fix: corrected routing rules. Prevention measures: chaos engineering. Total time to resolve was 15 minutes but now we have runbooks and monitoring to catch this early.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

Additionally, we found that automation should augment human decision-making, not replace it entirely.


 
Posted : 13/04/2025 6:38 am
(@samuel.miller567)

Not to be contrarian, but I see the timeline differently. In our environment, we found that Istio, Linkerd, and Envoy worked better because failure modes should be designed for, not discovered in production. That said, context matters a lot; what works for us might not work for everyone. The key is to invest in training.

One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. Would have saved us a lot of time.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.
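
To illustrate "failure modes should be designed for" at the application level (a generic sketch, not tied to any particular mesh configuration in this thread), a retry with exponential backoff and jitter around a flaky call might look like:

```python
# Generic retry with exponential backoff and jitter: the kind of
# failure handling a service mesh can also apply at the network layer.
# fetch_inventory is a made-up callee that always fails, for demo purposes.
import random
import time

def call_with_backoff(fn, max_attempts: int = 5, base_delay: float = 0.2):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError as exc:
            if attempt == max_attempts:
                raise  # out of retries; let the caller handle it
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

def fetch_inventory():
    # Stand-in for a real RPC/HTTP call that can raise ConnectionError.
    raise ConnectionError("upstream timeout")

try:
    call_with_backoff(fetch_inventory)
except ConnectionError:
    print("Gave up after retries; fallback or alerting would go here.")
```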


 
Posted : 14/04/2025 10:04 pm
(@donald.stewart436)

This is exactly the kind of detail that helps! I have a few questions: 1) How did you handle security? 2) What was your approach to canary deployments? 3) Did you encounter any issues with compliance? We're considering a similar implementation and would love to learn from your experience.

The end result was 90% decrease in manual toil.

I'd recommend checking out the community forums for more details.

The end result was 50% reduction in deployment time.

One thing I wish I knew earlier: documentation debt is as dangerous as technical debt. Would have saved us a lot of time.


 
Posted : 16/04/2025 10:57 am
 Paul
(@paul)

Hi everyone,

This has been a really rich discussion, and I appreciate how many of you have shared concrete results and lessons learned. I'm noticing some really compelling patterns emerging here that I think are worth highlighting, especially for folks like Benjamin Rivera and David Jenkins who are just starting their evaluation.

The Observability-First Principle

Multiple people have independently arrived at the same insight: observability isn't optional. Samuel Miller, Benjamin Campbell, and Stephanie Howard all emphasized this, and it resonates because it's foundational. You can't improve what you can't measure. But here's what I find most interesting—this isn't just about monitoring tools. It's about designing your failure modes intentionally, as Jose Jackson mentioned at the start. When you're choosing between Jenkins, GitHub Actions, or GitLab CI, this principle should heavily influence your decision. Which platform gives you the observability you need to catch issues before they hit production?

The Human Element Often Trumps Tooling

What strikes me most is how consistently people mention that the hardest part wasn't technical—it was organizational. Maria Jimenez, Andrew Roberts, and David Jenkins all called out stakeholder buy-in and change management as the real bottleneck. This is crucial context: your CI/CD tool choice matters far less than your ability to get teams aligned and trained. Timothy Scott's phased approach (1 month evaluation, 1 month training, ongoing knowledge sharing) with a $200K investment and 9-month payback period is exactly the kind of realistic timeline we should be planning for.

Practical Patterns I'm Seeing

Across different contexts (AWS, Kubernetes, various monitoring stacks), certain practices keep appearing:

  • Feature flags for gradual rollouts - Maria Rodriguez and others found this reduced deployment risk significantly (see the sketch after this list)
  • Chaos engineering in staging - Donna Jimenez and Stephanie Howard both recommend this as a way to catch issues early rather than in production
  • Documentation as a first-class artifact - Multiple teams mentioned that documentation debt is as dangerous as technical debt. Whether you're using GitBook, Notion, or Confluence, treating runbooks and decision records as critical infrastructure pays off
  • Security by design, not retrofit - Linda Morgan and James Allen both emphasized this. With Vault, AWS KMS, SOPS, or similar tools, security decisions made early prevent painful migrations later
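
As a concrete illustration of the feature-flag pattern, a percentage-based rollout can be as simple as hashing a stable user ID into a bucket. This is a hypothetical sketch; real deployments would typically use a service like LaunchDarkly or Unleash, but the core idea is the same:

```python
# Percentage-based feature rollout keyed on a stable user ID.
# Hashing makes the assignment deterministic, so a user stays in
# the same cohort across requests. Flag names here are made up.
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent

# Roll the hypothetical new pipeline UI out to 10% of users first.
print(is_enabled("new-pipeline-ui", "user-42", rollout_percent=10))
```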

For Those Evaluating Tools

Benjamin Rivera and David Jenkins, here's what I'd recommend focusing on based on this thread:

1. Start with your observability requirements - What dashboards do you need? What does your incident response workflow look like? Does the CI/CD platform integrate well with your alerting system (PagerDuty, Opsgenie, Slack, etc.)? (A minimal Slack-webhook example follows this list.)

2. Plan for training and knowledge sharing - Budget time and resources here. Multiple people mentioned underestimating this, but it consistently paid off.

3. Test failure modes explicitly - Don't wait to discover these in production. Chaos engineering tests in staging (as Donna Jimenez mentioned) or production-like environments (Victoria Robinson's approach) help you understand how your chosen tool behaves under stress.

4. Consider drift detection and automated remediation - Benjamin Rivera asked about this and it's a smart question. As you scale, manual intervention becomes a bottleneck.
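
Here is the minimal Slack-webhook example promised above, using only the standard library. The webhook URL is a placeholder; Slack issues the real one when you create an incoming-webhook integration for a channel:

```python
# Post a pipeline alert to a Slack incoming webhook.
# The URL below is a placeholder, not a real webhook.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text: str) -> None:
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print("Slack responded:", resp.status)

notify(":rotating_light: build #123 failed on main (example message)")
```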

Questions I'm Curious About

For the group: Has anyone directly compared how Jenkins, GitHub Actions, and GitLab CI handle observability and failure mode testing? I'm wondering if there are meaningful differences in how each platform helps you instrument your pipelines and catch issues early. Also, for teams using infrastructure-as-code tools like Terraform or CDK alongside their CI/CD platform—how did that choice influence your CI/CD selection?

Donald Stewart asked excellent questions about security, canary deployments, and compliance; if anyone can answer those from direct experience, it would round out this thread nicely.


 
Posted : 19/12/2025 6:47 pm