CI/CD for microservices - our multi-repo vs mono-repo strategy

19 Posts
19 Users
0 Reactions
391 Views
(@opsx-tom)
Posts: 76
Member Admin
Topic starter

Our team is split between a multi-repo and a mono-repo strategy for our microservices CI/CD. Here's how the arguments have shaken out so far.

Pro arguments:
- Easy to learn
- Active development
- Cloud-agnostic

Con arguments:
- Complex configuration
- Breaking changes between versions
- High operational overhead

Would love to hear from teams who've made this choice - any regrets or wins?


 
Posted : 08/09/2025 1:50 pm
(@karen.thomas72)
Posts: 0

Love this! We went through the same decision in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was that observability is not optional - you can't improve what you can't measure. We also found that team morale improved significantly once the manual toil was automated away. Happy to share more details if anyone is interested.

For context, we're using Elasticsearch, Fluentd, and Kibana.


 
Posted : 16/09/2025 5:39 am
(@deborah.howard208)
Posts: 0

We experienced the same thing! Here's how our rollout broke down: Phase 1 (1 month) involved assessment and planning. Phase 2 (3 months) focused on team training. Phase 3 (2 weeks) was all about knowledge sharing. Total investment was $100K but the payback period was only 3 months. Key success factors: good tooling, training, patience. If I could do it again, I would involve operations earlier.

One thing I wish I knew earlier: starting small and iterating is more effective than big-bang transformations. Would have saved us a lot of time.


 
Posted : 21/09/2025 10:18 pm
(@evelyn.sanders800)
Posts: 0

From an implementation perspective, here are the key points. First, data residency. Second, failover strategy. Third, cost optimization. We spent significant time on testing and it was worth it. Code samples available on our GitHub if anyone wants to take a look. Performance testing showed 50% latency reduction.

For context, we're using Istio, Linkerd, and Envoy.

One more thing worth mentioning: we underestimated the training time needed but it was worth the investment.

I'd recommend checking out the official documentation for more details.


 
Posted : 24/09/2025 2:48 pm
(@mark.perez536)
Posts: 0

Here's what operations has taught us, and the practices we've developed: Monitoring - Prometheus with Grafana dashboards. Alerting - Opsgenie with escalation policies. Documentation - Confluence with templates. Training - monthly lunch and learns. These have helped us maintain a low incident count while still moving fast on new features.
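To make the "Monitoring - Prometheus" piece concrete, here's a stripped-down, stdlib-only sketch of a counter rendered in the Prometheus text exposition format. In real services we use the official prometheus_client library instead; the metric name and labels below are made up for illustration.

```python
class Counter:
    """A monotonically increasing counter with a fixed label set."""

    def __init__(self, name, help_text, labels):
        self.name = name
        self.help_text = help_text
        self.labels = labels
        self.value = 0

    def inc(self, amount=1):
        self.value += amount

    def exposition(self):
        # Prometheus text format: HELP/TYPE comments, then the sample line.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(self.labels.items()))
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name}{{{label_str}}} {self.value}\n")


# Illustrative metric; "deployments_total" / "checkout" are hypothetical names.
deploys = Counter("deployments_total", "Completed deployments.", {"service": "checkout"})
deploys.inc()
print(deploys.exposition())
```

A scrape endpoint would just serve that string at /metrics; Grafana then queries Prometheus for the stored series.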

For context, we're using Jenkins, GitHub Actions, and Docker.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 27/09/2025 5:18 am
(@john.long261)
Posts: 0

Happy to share technical details from our implementation. Architecture: serverless with Lambda. Tools used: Grafana, Loki, and Tempo. Configuration highlights: GitOps with ArgoCD apps. Performance benchmarks showed 3x throughput improvement. Security considerations: zero-trust networking. We documented everything in our internal wiki - happy to share snippets if helpful.

The end result was 40% cost savings on infrastructure.

One more thing worth mentioning: the initial investment was higher than expected, but the long-term benefits exceeded our projections.


 
Posted : 27/09/2025 5:50 pm
(@michelle.gutierrez269)
Posts: 0

Key takeaways from our implementation: 1) Document as you go 2) Implement circuit breakers 3) Practice incident response 4) Build for failure. Common mistakes to avoid: over-engineering early. Resources that helped us: Phoenix Project. The most important thing is consistency over perfection.
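On point 2, this is roughly the shape of circuit breaker we mean - a minimal sketch, not taken from any particular library, with illustrative thresholds and names:

```python
import time


class CircuitBreaker:
    """Fail fast after repeated errors instead of hammering a sick dependency.

    Opens after `max_failures` consecutive failures; after `reset_after`
    seconds it lets one trial call through (half-open) to probe recovery.
    """

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                # Still open: reject immediately rather than wait on a timeout.
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

The `clock` parameter is injected so the open/half-open transitions can be unit tested without sleeping.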

Two things I wish I knew earlier: automation should augment human decision-making, not replace it entirely, and cross-team collaboration is essential for success. Would have saved us a lot of time.


 
Posted : 01/10/2025 7:23 pm
(@dennis.king704)
Posts: 0

Interesting points, but let me offer a counterargument on the team structure. In our environment, Vault, AWS KMS, and SOPS worked better for us; the broader lesson was that observability is not optional - you can't improve what you can't measure. That said, context matters a lot - what works for us might not work for everyone. The key is to experiment and measure.

For context, we're using Istio, Linkerd, and Envoy.

Additionally, we found that failure modes should be designed for, not discovered in production.


 
Posted : 05/10/2025 12:05 am
 Paul
(@paul)
Posts: 0

Here's our experience with this, start to finish. We started about 24 months ago with a small pilot. Initial challenges included team training. The breakthrough came when we simplified the architecture. Key metrics improved: 80% reduction in security vulnerabilities. The team's feedback has been overwhelmingly positive, though we still have room for improvement in monitoring depth. Lessons learned: communicate often. Next steps for us: add more automation.

One more thing worth mentioning: the hardest part was getting buy-in from stakeholders outside engineering.


 
Posted : 10/10/2025 9:33 pm
(@christina.gutierrez3)
Posts: 0

Here's how our journey with this unfolded. We started about 10 months ago with a small pilot. Initial challenges included legacy compatibility. The breakthrough came when we automated the testing. Key metrics improved: 50% reduction in deployment time. The team's feedback has been overwhelmingly positive, though we still have room for improvement in testing coverage. Lessons learned: communicate often. Next steps for us: expand to more teams.

The end result was 40% cost savings on infrastructure.


 
Posted : 11/10/2025 10:08 am
(@maria.jimenez673)
Posts: 0

Perfect timing! We're currently evaluating this approach. Could you elaborate on success metrics? Specifically, I'm curious about team training approach. Also, how long did the initial implementation take? Any gotchas we should watch out for?

I'd recommend checking out the official documentation for more details.

Two other things we've found along the way: cross-team collaboration is essential for success, and starting small and iterating is more effective than big-bang transformations.


 
Posted : 13/10/2025 10:38 pm
(@ruth.white53)
Posts: 0

I respect this view, but want to offer another perspective on the team structure. In our environment, we found that Terraform, AWS CDK, and CloudFormation worked better because security must be built in from the start, not bolted on later. That said, context matters a lot - what works for us might not work for everyone. The key is to invest in training.

One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. Would have saved us a lot of time.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 21/10/2025 4:08 pm
(@jose.williams694)
Posts: 0

Can confirm from our side. The most important lesson was that the human side of change management is often harder than the technical implementation. We initially struggled with team resistance but found that chaos engineering tests in staging worked well. The ROI has been significant - we've seen a 2x improvement.
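To make "chaos engineering tests in staging" concrete, the pattern is essentially this sketch (function names and the 30% fault rate are illustrative, not from a specific tool): inject failures at a known rate, then assert the client's retry policy still delivers an acceptable success rate.

```python
import random


def flaky(fn, failure_rate, rng):
    """Wrap `fn` so it fails randomly, simulating an unreliable dependency."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return fn(*args, **kwargs)
    return wrapper


def call_with_retries(fn, attempts=3):
    """The client-side policy under test: retry up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise


rng = random.Random(42)  # seeded so the chaos run is reproducible
service = flaky(lambda: "ok", failure_rate=0.3, rng=rng)

ok = 0
for _ in range(1000):
    try:
        call_with_retries(service)
        ok += 1
    except ConnectionError:
        pass

print(f"success rate under 30% injected faults: {ok / 1000:.3f}")
```

With 3 attempts against a 30% per-call fault rate, an individual request only fails when all three attempts fail, so the observed success rate should sit well above the raw 70%.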

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

One more thing worth mentioning: we discovered several hidden dependencies during the migration.


 
Posted : 23/10/2025 12:09 pm
(@donna.jimenez105)
Posts: 0

This is exactly the kind of detail that helps! I have a few questions: 1) How did you handle testing? 2) What was your approach to rollback? 3) Did you encounter any issues with costs? We're considering a similar implementation and would love to learn from your experience.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

The end result was 3x increase in deployment frequency.

Additionally, we found that cross-team collaboration is essential for success.


 
Posted : 26/10/2025 2:02 am
(@stephanie.howard98)
Posts: 0

I can offer some technical insights from our implementation. Architecture: serverless with Lambda. Tools used: Elasticsearch, Fluentd, and Kibana. Configuration highlights: CI/CD with GitHub Actions workflows. Performance benchmarks showed 99.99% availability. Security considerations: secrets management with Vault. We documented everything in our internal wiki - happy to share snippets if helpful.

One thing I wish I knew earlier: failure modes should be designed for, not discovered in production. Would have saved us a lot of time.


 
Posted : 28/10/2025 12:34 pm
Page 1 / 2