New release: Jailer Database Tools
# Jailer Database Tools
Jailer is a tool for database subsetting and relational data browsing.
It creates small slices from your database and lets you navigate through it by following the relationships. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
The Subsetter creates small slices from your database (consistent and referentially intact) as SQL (topologically sorted), DbUnit records or XML. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
The Data Browser lets you navigate through your database following the relationships (foreign key-based or user-defined) between tables.
# Features
Exports consistent and referentially intact row-sets from your productive database and imports the data into your development and test environment.
Improves database performance by removing and archiving obsolete data without violating integrity.
Generates topologically sorted SQL-DML and hierarchically structured JSON, YAML, XML and DbUnit datasets.
Data Browsing. Navigate bidirectionally through the database by following foreign-key-based or user-defined relationships.
SQL Console with code completion, syntax highlighting and database metadata visualization.
A demo database is included, with which you can get a first impression without any configuration effort.
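To make the "topologically sorted" part concrete: the generated INSERT statements have to put parent tables before the child tables whose foreign keys reference them. A minimal sketch of that ordering (the table names and FK graph here are hypothetical, not from Jailer):

```python
from graphlib import TopologicalSorter

# Map each table to the parent tables its foreign keys reference.
fk_parents = {
    "customers": set(),
    "products": set(),
    "orders": {"customers"},
    "order_items": {"orders", "products"},
}

# static_order() yields predecessors (parents) before dependents (children),
# which is exactly the order in which the INSERTs stay referentially intact.
insert_order = list(TopologicalSorter(fk_parents).static_order())
print(insert_order)
```

Jailer does this analysis on the real FK metadata of the source schema; the sketch only illustrates why the export order matters.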
https://redd.it/1gbnhqe
@r_devops
PagerDuty not great for small teams?
Not sure if I'm missing something here, but it seems like PagerDuty really isn't built for smaller teams? I just recently broke up what was essentially a monolithic escalation policy (everyone on the schedule was more or less on call all the time, and issues could be escalated to the same person if they didn't ack) into smaller Escalation Policies and Schedules, basically 3-ish people per schedule.
PagerDuty recommends creating a primary and a secondary schedule, but how's that supposed to work with three people? Ideally I'd define primary, and secondary would be defined as an offset of that: page primary, then escalate to whoever is on deck to be on call next. It could work with the existing guidance, but everyone would have to be in both schedules and the offset would have to be managed manually. And then, if someone overrides in primary and doesn't also make a similar override in secondary, you could end up with primary and secondary being the same person.
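For what it's worth, the "secondary as an offset of primary" idea is easy to state precisely. A minimal sketch (the names and the weekly cadence are made up):

```python
# One 3-person rotation; secondary is whoever is on deck next, so the pair
# can never collapse onto the same person (unlike two hand-managed schedules).
ROTATION = ["alice", "bob", "carol"]

def on_call(week: int) -> tuple[str, str]:
    """Return (primary, secondary) for a given week number."""
    primary = ROTATION[week % len(ROTATION)]
    secondary = ROTATION[(week + 1) % len(ROTATION)]
    return primary, secondary

print(on_call(0))  # ('alice', 'bob')
print(on_call(2))  # ('carol', 'alice')
```

PagerDuty schedules don't expose this as a first-class feature, which is the gap the post is describing.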
What I really want is an escalation policy that alarms to a team schedule, escalates through everyone there first, and then hits my team as a backup. Right now if the on call for that team doesn’t ack it jumps straight to me and I have to manually kick it to the next person on the schedule.
Am I missing something or is PagerDuty really just assuming that a team would have 6ish people with two full primary and secondary rotations?
https://redd.it/1gbn2dw
@r_devops
How do you guys track your deployments when doing configuration management?
We are currently discussing migrating away from our current tool stack, which consists of TFS (for political and financial reasons).
We use it to host our code, run our builds, and create and host our artifacts.
We can easily create a release with specific build artifacts and deploy it through agents using PowerShell.
We have around 100 different customers that we manage. Each customer has between 2 and 4 'stages' (dev/int/prd, for example), and we have a total of 4000 tests that get executed per deployment per customer.
In the end, we have almost half a million tests that run to ensure that our artifacts are correctly installed and configured.
Since we need to migrate, we have been evaluating GitLab, but we realized that it is not 'as complete' as TFS.
Especially the deployment part. It looks like GitLab is only intended for a smaller number of environments.
In addition, displaying the test results, or even just the pipeline runs, really doesn't scale and definitely lacks some user-friendliness.
I was wondering how folks in other places handle this type of scenario. I feel like we will not be able to find a similar product, and that it would be more of an 'aggregation' of several products that would allow us to do this.
I would be curious to hear how you:
- Deploy stuff onto your environments (Ansible? DSC / Chef / Puppet / something else?)
- Visually keep track of what passed/failed and where (nice-looking graphs with green & red)
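As a sketch of what that second point can start from: roll the per-customer, per-stage test results up into a small pass/fail matrix and render it however you like (the customer names and counts below are made up):

```python
# (customer, stage, passed, failed) tuples as a pipeline might report them
results = [
    ("acme", "dev", 4000, 0),
    ("acme", "int", 3998, 2),
    ("acme", "prd", 4000, 0),
    ("globex", "dev", 3990, 10),
]

def summary(rows):
    """Map each customer to {stage: 'OK' | 'FAIL'}."""
    out: dict[str, dict[str, str]] = {}
    for customer, stage, passed, failed in rows:
        out.setdefault(customer, {})[stage] = "OK" if failed == 0 else "FAIL"
    return out

print(summary(results))
```

A grid like this (customer rows, stage columns, green/red cells) is what most dashboards ultimately reduce to, whatever tool draws it.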
Cheers
https://redd.it/1gbofud
@r_devops
Flox, a better alternative to Dev Containers
Hi my fellow DevOps,
I often have to set up dev environments for the teams and projects I work with, so I decided to write a short introduction to Flox, which really hits the spot, especially compared to Dev Containers.
➡️ https://medium.com/@pierre_49652/flox-better-alternative-to-dev-containers-d02e1a2ec423
Let me know what you think :)
https://redd.it/1gbpbzp
@r_devops
Retrieving TenantID and ClientID from the Service Connection
Hi there,
In short, I found some articles on the internet which claim it should be possible to retrieve things such as the ClientId and TenantId from the Service Connection that you specify in your main.yaml.
This way I wouldn't have to put these into any variables file, or in the scripts themselves.
addSpnToEnvironment: true
$env:AZURETENANTID
$env:AZURECLIENTID
However, having tried to put this into the main.yaml, I can't seem to use these variables.
When I use Write-Host, these variables come up empty.
Currently my main.yaml looks like this:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      .\SendMailMessage\SendMailMessage.ps1 -AccessToken $env:AZUREACCESSTOKEN -TenantId $env:AZURETENANTID -ClientId $env:AZURECLIENTID
  displayName: 'Send Email using Microsoft Graph and Service Connection'
Does anyone know how exactly I can get these variables from the Service Connection into my PowerShell script?
Other than people (and Microsoft) mentioning that you can, I can't seem to find out how exactly.
Thanks in advance to anyone who can shed some light on this :-)
https://redd.it/1gbtznh
@r_devops
Has anyone got a CISSP cert?
I am thinking about expanding my skill set and exploring some security engineering. I have a heavy sysadmin and DevOps background, cloud experience, and all the DevOps things. I am just wondering if anyone has experience walking this path that I can learn from.
https://redd.it/1gbp3az
@r_devops
Canary deployment
Need help with an issue with a canary deployment using Flagger. Does anyone have hands-on experience with it? Need urgent assistance :(
https://redd.it/1gbwoii
@r_devops
Canary deployment issue
I am facing an issue with a canary deployment using Flagger. Would really appreciate any suggestions. More about the issue in the comments.
https://redd.it/1gby1dj
@r_devops
Need hands-on projects for DevOps
Guys, I need help gaining hands-on experience with end-to-end pipelines, including Kubernetes, Terraform, Docker, and Jenkins/GitLab CI/CD. Please help me.
https://redd.it/1gbwlcp
@r_devops
Best practice for organizing test mocks/stubs in a monorepo?
I have a Turborepo monorepo with two apps - a React + Vite frontend and a Fastify REST API. All shared packages are configured as native ES modules (`"type": "module"`) and have `sideEffects: false` since they only contain types, schemas, and constants.
I need to add test mocks/stubs for my types and schemas, and I'm trying to decide the best way to structure this. Should they live next to their types, or in a separate testing package?
Here's what I mean:
Option 1: Co-located mocks
import { ApiResponse, apiResponseStub } from '@acme/contract';
import { User, userStub } from '@acme/database';
import { Config, configStub } from '@acme/common';
Option 2: Separate testing package
import { ApiResponse } from '@acme/contract';
import { User } from '@acme/database';
import { Config } from '@acme/common';
import {
  apiResponseStub,
  userStub,
  configStub
} from '@acme/testing';
While co-locating stubs next to their types/schemas feels a lot easier, I have some concerns:
1. Tree-shaking reliability: Even with `sideEffects: false`, can I trust that test code won't leak into production builds?
2. Package structure: If I go with a separate testing package, how should I organize it?
Appreciate any input I can get on this :)
https://redd.it/1gc0qst
@r_devops
CLion with Docker toolchain: "The file does not belong to any project target; code insight features may not work properly"
I'm trying to adapt my development workflow to make use of Docker containers for local development.
I'm having a hell of a time trying to get CLion to be configured correctly. The full details of the problem are posted to StackOverflow if anyone is interested in contributing.
See StackOverflow post
https://redd.it/1gc2fkd
@r_devops
Trace your application with OpenTelemetry and Jaeger
https://medium.com/@rasvihostings/trace-your-application-with-opentelemetry-and-jaeger-109fb0420b3b
#gke #k8s #openTelemetry #sre #observability #python
https://redd.it/1gc4c0t
@r_devops
Question for the devops folks
Dear DevOps Engineer, I have a question about deploying Docker images in Kubernetes. When I build an image, push it to a registry, and then pull it with Kubernetes, how does it get an IP address to make it accessible via a domain like www.example.com? Also, in my front end, I specify the API URL in an .env file. How can I know the correct API URL to use once it’s deployed to the cloud? I understand Kubernetes uses services, but could you explain how this setup works in a cloud environment?
https://redd.it/1gc6etv
@r_devops
What’s in demand now?
Kubernetes and hyperscaler experience seems pretty easy to find now, and salaries aren't spectacular anymore. Are there any must have skills anymore, or is it all downhill from here as it becomes more saturated? Genuinely curious as to what comes next.
https://redd.it/1gc7qbj
@r_devops
Looking for webapp hosting recommendations - details in post
Hi /r/devops! Not sure if this sort of post belongs here, but figured I'd give it a shot :)
I often have personal programming projects, and sometimes they're webapps. I'm trying to figure out what's a good place to host them.
So far I've used Heroku. I know you can get cheaper (particularly with usage-based costs like AWS instead of flat costs like Heroku), but $12/mo ($7 for cheapest backend "dyno", $5 for cheapest DB) is reasonable for one project for me. Convenience is good.
However, it's nice to be able to have multiple projects going in parallel, and to try new projects out at will, without having to think too much about "is this worth spending money on" or worry about forgetting to spin down unused projects
I imagine I can do some multiplexing, so multiple projects don't cost more than one project. (It's ok if one of my projects causes all of them to crash. These are just toys.) But I don't think I can do this with Heroku. Specifically: I can probably run multiple projects' backends on one dyno, but for databases -- Heroku Postgres doesn't give you permissions to use CREATE DATABASE, alas
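One hedged workaround for the CREATE DATABASE restriction: Heroku Postgres does let you create schemas, so each toy project can get its own schema inside the one shared database. A minimal sketch (the project name is hypothetical; the statements would be run through psycopg2 or any Postgres client):

```python
def schema_setup_sql(project: str) -> list[str]:
    """SQL that isolates one project in its own schema of a shared database."""
    return [
        f'CREATE SCHEMA IF NOT EXISTS "{project}";',
        # search_path makes the project's unqualified table names resolve here
        f'SET search_path TO "{project}";',
    ]

for stmt in schema_setup_sql("toy_blog"):
    print(stmt)
```

Isolation is weaker than separate databases (shared connection limit, one runaway project can exhaust the pool), but that matches the "it's ok if one toy crashes all of them" constraint.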
Any ideas?
Maybe DigitalOcean? AWS? Dokku + DigitalOcean or something? Something else?
A few more thoughts / requirements:
- Architecture is always simple, no weird needs there (a frontend, a backend, a database)
- Traffic for my projects is very low -- it's just me experimenting, usually I'm the only user, not projects I publish to the public or anything. If I ever have something that needs more traffic, I can figure out what platform to use then -- right now I just wanna figure out what platform I want to use for cheap & easy experimentation
- I want to minimize the risk of me accidentally doing something dumb and then ending up with a bill for thousands of dollars
- I want to minimize or eliminate the risk of someone scraping or DOS-ing my site and then I end up with a big bill -- so cut off my bandwidth or crash my server rather than giving me overage charges or autoscaling, etc
https://redd.it/1gc42w1
@r_devops
Best practices for updated dev toolchain for rhel8 box w/o using a container
Working on a rhel8 box with gcc8 and would like to build with a newer compiler & tools (emacs, nodejs, clang, ...). I could use a dev container, but we're not ready for container deployments and don't care to have that battle yet. I've been other places where they would build all their own tools and you'd just add them to your path (i.e. they'd have a /devtools/X.XX/ dir on the box with everything installed underneath it).
Any other suggestions?
https://redd.it/1gcd5vb
@r_devops
Software lifecycle book recommendations
What are some good books on the software lifecycle that focus on the more practical aspects of the topic?
https://redd.it/1gcdrao
@r_devops
How to build a chat bot with a custom knowledge base? Guide me.
I am starting out building web apps using AI.
I want to build a tool to convert PDF to CSV and integrate a chatbot to interact with the knowledge base (i.e. the converted CSVs).
Can you guide me on how I can build this?
Any open-source tools with a similar structure would be a big help.
https://redd.it/1gcetg9
@r_devops
What Does Your Day Look Like as an Infrastructure Engineer? Seeking Insights!
Hey there,
Are there any Infrastructure Engineers here? What does a typical day at the office look like for you? What are your main responsibilities, and which skills or tools are essential for your role?
I'm currently working as a trainee Infrastructure Engineer, and I'm gaining exposure to various areas like databases, IIS, cloud, networking, and servers—primarily on Windows. Our team is also expanding into Linux, along with technologies like Kubernetes, Kafka, Nginx, and more. I'd love to hear about your experiences!
https://redd.it/1gcg3ld
@r_devops
Python vs Bash Scripting.
Many DevOps jobs need someone to be proficient in these skills.
Lately I have been concentrating on upskilling in bash scripting, because I feel it's more relevant to the tasks I do lately.
Everything DevOps folks do is mostly on the command line.
Between Python and bash scripting, which is more important or pertinent to our jobs as DevOps engineers?
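One way to frame the choice: for anything that is a pipeline of existing commands, bash wins on brevity (counting error lines is just `grep -c ERROR app.log`), while Python wins once you need parsing, data structures, or error handling. The same toy task in Python, on inline sample data:

```python
log = """\
INFO  service started
ERROR db connection refused
INFO  retrying
ERROR db connection refused
"""

# More verbose than the grep one-liner, but trivially extendable with
# thresholds, per-component counts, alerting, unit tests, etc.
errors = [line for line in log.splitlines() if line.startswith("ERROR")]
print(len(errors))  # 2
```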
https://redd.it/1gch4yw
@r_devops
What are your recommendations?
I know I have to get a grasp of programming in Python or Go, and even bash scripting.
I have a little experience, and I think I can learn and do things if I embark on projects straight away as a DevOps engineer.
My question is: do I jump on projects immediately and find my fit, going from beginner to intermediate to advanced, or do I follow the popular roadmaps on the web, first learning Linux (which I can't learn all of) and the other tools before doing projects?
What do you guys recommend?
https://redd.it/1gch0zo
@r_devops