Part 2: Data lake architecture on AWS: S3, Glue, and Athena

11 Posts
11 Users
0 Reactions
152 Views
(@linda.morgan757)
Posts: 0
Topic starter
[#279]

We took a similar route in our organization and can confirm the benefits. One thing we added was cost allocation tagging for accurate showback. The key insight for us was that documentation debt is as dangerous as technical debt. We also underestimated the training time needed, but it was worth the investment. Happy to share more details if anyone is interested.
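Cost allocation tagging only pays off if someone actually rolls the tagged spend up into per-team numbers. Here is a minimal sketch of that showback step, assuming a hypothetical cost-record shape (a `cost_usd` amount plus a `tags` dict with a `team` key, roughly what a billing export might give you); it is not the poster's actual tooling:

```python
from collections import defaultdict

def showback_by_team(cost_records):
    """Roll tagged cost records up into per-team totals.

    Spend without a 'team' tag is grouped under 'untagged' so it
    stays visible instead of silently disappearing from the report.
    """
    totals = defaultdict(float)
    for record in cost_records:
        team = record.get("tags", {}).get("team", "untagged")
        totals[team] += record["cost_usd"]
    return dict(totals)
```

In practice the records would come from a billing export (e.g. AWS Cost and Usage Reports), but the aggregation logic is the same.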

Additionally, we found that starting small and iterating is more effective than big-bang transformations.

One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. Would have saved us a lot of time.

For context, we're using Istio, Linkerd, and Envoy for the service mesh, with Datadog, PagerDuty, and Slack for monitoring and on-call.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 31/12/2024 10:21 pm
(@christina.gutierrez3)
Posts: 0

This is almost identical to what we faced. The problem: security vulnerabilities. Our initial approach was simple scripts, but that didn't work because it was too error-prone. What actually worked: cost allocation tagging for accurate showback. The key insight was that documentation debt is as dangerous as technical debt. Now we're able to detect issues early.

The end result was 99.9% availability, up from 99.5%.

One more thing worth mentioning: unexpected benefits included better developer experience and faster onboarding.


 
Posted : 02/01/2025 3:07 pm
(@katherine.edwards302)
Posts: 0

We hit this same problem! Symptoms: frequent timeouts. Root cause analysis revealed network misconfiguration. Fix: increased pool size. Prevention measures: chaos engineering. Total time to resolve was 15 minutes but now we have runbooks and monitoring to catch this early.
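For sizing the pool, a back-of-envelope estimate via Little's law (expected concurrency = arrival rate × average service time, plus headroom for bursts) is a reasonable starting point before load testing confirms it. This is a hypothetical sketch, not the poster's actual fix:

```python
import math

def required_pool_size(requests_per_sec, avg_latency_sec, headroom=1.5):
    """Estimate a connection pool size via Little's law.

    Expected in-flight requests = rate x average latency; the headroom
    factor pads for bursts so the pool doesn't exhaust at peak.
    """
    concurrency = requests_per_sec * avg_latency_sec
    return math.ceil(concurrency * headroom)
```

For example, 20 req/s at 500 ms average latency gives 10 expected in-flight requests, so a pool of 15 with the default headroom.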

One thing I wish I knew earlier: observability is not optional - you can't improve what you can't measure. Would have saved us a lot of time.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 04/01/2025 6:02 am
(@james.allen159)
Posts: 0

This resonates with my experience, though I'd emphasize team dynamics. We learned this the hard way, even though integration with existing tools was smoother than anticipated. Now we always make sure to test regularly. It has added maybe an hour to our process, but it prevents a lot of headaches down the line.

For context, we're using Terraform, AWS CDK, and CloudFormation for infrastructure; Datadog, PagerDuty, and Slack for monitoring and on-call; and Vault, AWS KMS, and SOPS for secrets management.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

One thing I wish I knew earlier: starting small and iterating is more effective than a big-bang transformation; we had to iterate several times before finding the right balance, and knowing that up front would have saved us a lot of time. Integration with existing tools, on the other hand, was smoother than anticipated.


 
Posted : 06/01/2025 6:53 am
(@joyce.hughes421)
Posts: 0

While this is well-reasoned, I see things differently on the timeline. In our environment, we found that Datadog, PagerDuty, and Slack worked better because starting small and iterating is more effective than big-bang transformations. That said, context matters a lot - what works for us might not work for everyone. The key is to experiment and measure.

For context, we're using Kubernetes, Helm, ArgoCD, and Prometheus.

The end result was 50% reduction in deployment time.

Additionally, we found that observability is not optional - you can't improve what you can't measure.

I'd recommend checking out conference talks on YouTube and relevant blog posts for more details.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.

One more thing worth mentioning: team morale improved significantly once the manual toil was automated away.


 
Posted : 07/01/2025 8:56 pm
(@gregory.ortiz371)
Posts: 0

Funny timing - we just dealt with this. The problem: deployment failures. Our initial approach was simple scripts, but that didn't work because it was too error-prone. What actually worked: compliance scanning in the CI pipeline. The key insight was that observability is not optional - you can't improve what you can't measure. Now we're able to deploy with confidence.
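As an illustration of what a CI compliance scan can look like in miniature: fail the build whenever a resource definition violates a policy, such as an S3 bucket without encryption or public-access blocking. The checks and field names here are hypothetical, not the poster's actual pipeline:

```python
def scan_buckets(bucket_defs):
    """Toy compliance scan over parsed S3 bucket definitions.

    Returns a list of violation messages; a CI step would fail the
    build whenever this list is non-empty.
    """
    violations = []
    for bucket in bucket_defs:
        if not bucket.get("encryption_enabled", False):
            violations.append(f"{bucket['name']}: encryption disabled")
        if not bucket.get("block_public_access", False):
            violations.append(f"{bucket['name']}: public access not blocked")
    return violations
```

Real pipelines typically run a policy engine against Terraform plans or CloudFormation templates, but the pass/fail contract is the same.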

Additionally, we found that starting small and iterating is more effective than big-bang transformations.

One more thing worth mentioning: team morale improved significantly once the manual toil was automated away.


 
Posted : 09/01/2025 11:57 am
(@donald.stewart436)
Posts: 0

We faced this too! Symptoms: frequent timeouts. Root cause analysis revealed connection pool exhaustion. Fix: corrected routing rules. Prevention measures: chaos engineering. Total time to resolve was a few hours but now we have runbooks and monitoring to catch this early.

A few more things worth mentioning: integration with existing tools was smoother than anticipated, we saw unexpected benefits in developer experience and faster onboarding, and team morale improved significantly once the manual toil was automated away.


 
Posted : 10/01/2025 3:55 am
(@christopher.mitchell35)
Posts: 0

I can definitely relate! What we learned: Phase 1 (1 month) involved assessment and planning. Phase 2 (2 months) focused on process documentation. Phase 3 (2 weeks) was all about full rollout. The total investment was $200K, but the payback period was only 3 months. Key success factors: automation, documentation, and feedback loops. If I could do it again, I would set clearer success metrics.
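The payback arithmetic is easy to sanity-check: payback period = upfront investment / monthly savings, ignoring discounting. A $200K investment paying back in 3 months implies monthly savings of roughly $66.7K. A quick sketch (the function names are ours, for illustration only):

```python
def payback_months(investment_usd, monthly_savings_usd):
    """Simple payback period in months (no discounting)."""
    return investment_usd / monthly_savings_usd

def implied_monthly_savings(investment_usd, target_payback_months):
    """Monthly savings needed to hit a target payback period."""
    return investment_usd / target_payback_months
```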

One thing I wish I knew earlier: documentation debt is as dangerous as technical debt. Would have saved us a lot of time.


 
Posted : 10/01/2025 9:47 am
(@sara)
Posts: 0

I respect this view, but want to offer another perspective on the tooling choice. In our environment, we found that Elasticsearch, Fluentd, and Kibana worked better because observability is not optional - you can't improve what you can't measure. That said, context matters a lot - what works for us might not work for everyone. The key is to start small and iterate.

I'd recommend checking out conference talks on YouTube for more details.

The end result was 60% improvement in developer productivity.


 
Posted : 11/01/2025 5:11 pm
(@samuel.miller567)
Posts: 0

From the ops trenches, here are the practices we've developed: Monitoring - Datadog APM and logs. Alerting - custom Slack integration. Documentation - Notion for team wikis. Training - pairing sessions. These have helped us maintain high reliability while still moving fast on new features.
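For a custom Slack integration, the core is usually just posting a JSON payload to an incoming-webhook URL; Slack's webhook format only requires a `text` field. Here is a minimal payload builder as a sketch (how severity, service, and runbook are packed into the text is our own convention, not the poster's setup):

```python
import json

def slack_alert_payload(service, severity, message, runbook_url=None):
    """Build a Slack incoming-webhook payload for an alert.

    Only 'text' is required by Slack's webhook API; we pack severity,
    service name, and an optional runbook link into it.
    """
    lines = [f"[{severity.upper()}] {service}: {message}"]
    if runbook_url:
        lines.append(f"Runbook: {runbook_url}")
    return json.dumps({"text": "\n".join(lines)})
```

The resulting string can be POSTed to the webhook URL with any HTTP client.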

Things I wish I knew earlier: automation should augment human decision-making, not replace it entirely; cross-team collaboration is essential for success; and failure modes should be designed for, not discovered in production. Knowing these up front would have saved us a lot of time.

Additionally, we found that observability is not optional - you can't improve what you can't measure.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 13/01/2025 3:59 pm
(@jennifer.bailey132)
Posts: 0

Wanted to contribute some real-world operational insights we've developed: Monitoring - Prometheus with Grafana dashboards. Alerting - PagerDuty with intelligent routing. Documentation - Confluence with templates. Training - pairing sessions. These have helped us maintain a low incident count while still moving fast on new features.

Things I wish I knew earlier: automation should augment human decision-making, not replace it entirely, and cross-team collaboration is essential for success. Knowing these would have saved us a lot of time.

Additionally, we found that security must be built in from the start, not bolted on later.

For context, we're using Istio, Linkerd, and Envoy for the service mesh, with Elasticsearch, Fluentd, and Kibana for logging.

Feel free to reach out if you have more questions - happy to share our runbooks and documentation.


 
Posted : 15/01/2025 8:58 am