Hey, can anyone tell me the day-to-day tasks of a DevOps engineer?
I am trying to learn DevOps but I don't see any detailed videos. I would like to know the daily tasks and what to learn. Please feel free to reach out directly as well! Thanks 🙏
https://redd.it/zn2d1q
@r_devops
Help with bots
I'd like to set up a bot to make Reddit work for me and my friends and set challenges. /NumberFiles
https://redd.it/zn8hgm
@r_devops
Separate git repository just for devops?
We decided on a microservice architecture, with the microservices split (by functionality) into their own Git/GitHub repositories.
The project has just started, and currently only one microservice (which needs to be split as well) with one repository exists. The repository contains everything related to the GitHub CI/CD pipelines, Helm charts, IaC templates, and much more that is not necessarily related to this microservice but is on the general DevOps side of things (certificate management, certificate requests, etc.). Copying all of this into every repository doesn't sound manageable in the long run (changing something CI/CD-related in one repo will most likely require the same change in all the other repos).
I'm currently thinking of creating a separate devops repository and moving the GitHub Actions, the IaC templates that are not tied to a specific microservice, etc. into it. Each microservice repository would only provide a values.yaml (anything related to the microservice itself) and minimal code to call the GitHub Actions from the devops repository. The devops repository would provide Helm charts, which can be parametrized by the values.yaml each microservice repository provides.
There are also other reasons why I lean toward a separate devops repository, which I unfortunately can't go into. To sum up, I would like to put everything in there that is not microservice-related.
Does anyone have experience regarding this? Is it a bad idea?
https://redd.it/zna1ni
@r_devops
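The pattern described above maps closely onto GitHub's reusable workflows: the shared repository defines workflows with a `workflow_call` trigger, and each microservice repository invokes them with a few lines. A sketch with hypothetical repo, chart, and input names (the chart source and values layout are placeholders):

```yaml
# .github/workflows/deploy.yml in a hypothetical shared my-org/devops repo
name: deploy
on:
  workflow_call:
    inputs:
      service-name:
        required: true
        type: string
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # In a reusable workflow, checkout fetches the *calling* repo,
      # so the service's values.yaml is available here.
      - uses: actions/checkout@v4
      - name: Deploy the shared chart with this service's values
        run: helm upgrade --install "${{ inputs.service-name }}" ./charts/app -f values.yaml
---
# Caller in each microservice repository: a thin wrapper plus its values.yaml
name: ci
on: [push]
jobs:
  deploy:
    uses: my-org/devops/.github/workflows/deploy.yml@main
    with:
      service-name: my-service
```

A change to the shared workflow then propagates to every service on its next run, which addresses the copy-everywhere concern.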
Question: What tools or technologies are you looking into lately?
I have been heads-down in AWS-specific stuff, but I figured I'd check in on what's popular these days.
The winter break is around the corner, and I just want to add a few more things to my list :D
I am curious what technologies or tools y'all are looking into these days (or plan on looking into soon)?
**What’s on my list (to look into):**
* SOPS / Sealed Secrets
* Terraform CDK
* SST (serverless framework)
* Kubernetes (definitely late to the party)
* Some AWS data-related services (e.g. Kinesis)
https://redd.it/zn96y9
@r_devops
Life after Nx
I don't have much experience with monorepos, but I recently worked on a project that followed this paradigm and used Nx to manage it. I actually found it quite productive and was impressed with the way Nx handled multiple projects. In particular I found the caching and the "affected" mechanism very effective, and the ability to create custom generators was quite helpful.
I'm thinking about adopting a monorepo for an upcoming side project, but I'm trying to feel out whether there's a better option. I don't have any specific complaints about Nx, but it is very JS/TS-centric, and my project will involve a lot of Rust and Python subprojects. I understand that Nx can still handle these, but is there something more suitable? Some searching leads me to believe the main competitors might be Bazel and Lerna, but I lack experience with either. Looking for opinions on the best language-agnostic tool for managing monorepos.
https://redd.it/zncba6
@r_devops
Alert for self-signed certs
hi folks
I created a Python script that runs on Concourse (as a pipeline) to alert us if any of our self-signed SSL certs is going to expire soon. My program manager was not satisfied with this and felt I'm using the pipeline for the wrong reason.
I chose Python on Concourse because the other solutions I explored were paid, or involved new tech like Nagios that I'm not experienced with. So I have two questions:
1. Was I wrong to run a Python script as a cron job in a Concourse pipeline?
2. Is there any solution other than Nagios worth exploring?
https://redd.it/zndh8r
@r_devops
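For context on how small such a check is, here is a minimal sketch of the expiry math using only Python's standard library (the host list, threshold, and alerting side are left out; the notAfter string is assumed to be in OpenSSL's text format):

```python
import ssl
import time

def days_until_expiry(not_after: str) -> float:
    """Days remaining for a certificate, given its notAfter field in
    OpenSSL text form, e.g. "Dec 31 23:59:59 2025 GMT"."""
    # ssl.cert_time_to_seconds parses that exact format as UTC epoch seconds.
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

def expiring_soon(not_after: str, threshold_days: int = 30) -> bool:
    """True if the cert expires within threshold_days (or already has)."""
    return days_until_expiry(not_after) < threshold_days
```

One caveat for self-signed certs fetched over the network: `ssl`'s `getpeercert()` returns an empty dict when the cert isn't verified, so scripts commonly shell out to `openssl x509 -enddate -noout` or parse the PEM with the third-party `cryptography` package instead. On question 2, Prometheus's blackbox_exporter exposes cert expiry as a metric (`probe_ssl_earliest_cert_expiry`), which fits an existing monitoring stack better than a pipeline if one is available.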
DevOps and Feature Flags
I would like to understand the role of feature flags in the DevOps function.
i. Do you "create & toggle" feature flags, or "only toggle" them?
ii. Which use cases do feature flags help you with?
https://redd.it/zncvl9
@r_devops
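For anyone new to the concept, a feature flag is just a runtime switch consulted before a code path runs. A minimal in-memory sketch (real setups back the store with a flag service such as LaunchDarkly or Unleash; the flag names here are made up):

```python
# In-memory flag store; production systems fetch this from a flag service
# or config store so flags can be toggled without a redeploy.
FLAGS = {"new-checkout": True, "beta-dashboard": False}

def is_enabled(flag: str, default: bool = False) -> bool:
    """Return the flag's state, falling back to a safe default."""
    return FLAGS.get(flag, default)

def checkout() -> str:
    # The flag gates which implementation runs, decoupling deploy from release.
    if is_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"
```

Typical use cases follow from the decoupling: gradual rollouts, kill switches for incident response, A/B tests, and trunk-based development where unfinished work ships dark.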
What security controls prevent someone from pushing arbitrary code into production?
What is the typical process before something is pushed live? Just as code pushed to a repo is reviewed by individuals before being approved, does the same happen in DevOps automation tools?
1. Does this happen with the likes of Octopus Deploy, Jenkins, or Azure DevOps? Please share any tools you use in enterprise environments.
2. What steps are taken to ensure someone cannot accidentally, or even maliciously, push something bad live?
3. We have a policy that high-severity vulnerabilities are not allowed to go live. Broadly, what kind of processes can you set up to track/audit this? I appreciate that false-positive findings and risk acceptance can be used to let something through, because oftentimes a lot of vulnerabilities are nonsense and noisy. But how is this done? Is it flagged somewhere before you press go-live, an "Are you sure you want to go live?" sort of thing? I have no idea.
https://redd.it/zngalh
@r_devops
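The common pattern is to layer protected branches (required reviews before merge) with a manual approval stage on the deploy itself. As one concrete illustration, GitHub Actions can tie a deploy job to an "environment" whose required reviewers must approve before the job runs; the job and script names below are placeholders:

```yaml
name: release
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    # "production" is configured under Settings > Environments with required
    # reviewers; the job pauses until one of them approves this run, and the
    # approval is recorded in the run's audit trail.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
```

Azure DevOps has the equivalent "approvals and checks" on environments, and Jenkins an `input` step. Vulnerability policy is usually enforced earlier, as a scanner step that fails the pipeline above a severity threshold, with accepted risks tracked as suppressions in the scanner's config so they are auditable.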
Beginner's guide on how to set up a new project with proper CI/CD pipeline and containers
Good CI/CD practices and proper containerization are at the core of the best DevOps teams. Adding them later in the lifecycle of a project is time-consuming and expensive.
To give people an idea of the possibilities, and of how small the initial time investment is, we wrote an article with a step-by-step guide: a simple React application, containerized and set up with docker-compose for local development. Inserting basic checks into the CI/CD pipeline is also covered.
https://www.coguard.io/post/ci-cd-pipeline
Enjoy the read. And yes, one could've made different design choices, but we tried to keep it simple ;-)
https://redd.it/znirfs
@r_devops
Would you consider a job with on-prem rather than public?
Had an interview today, and the company is 100% on-prem. I'm not too keen on this, as I've got AWS and Azure experience and feel it would be a waste not to use it, potentially hindering future job opportunities. As they say, 'use it or lose it'.
Would it make you reconsider if the company was on-prem rather than public cloud? Why or why not?
https://redd.it/zniddk
@r_devops
React/Flask Minikube k8s pods are working, but not finding each other. How to fix?
I've got two pods for the front end and back end, React and Flask respectively, that reside in the same namespace. Both pods run and are individually functional, but the frontend is not finding its API. The way the project is currently configured, React expects the API at the pod's localhost:3000, and I am unsure of the best way to route that to the API service.
Since React has a proxy field in package.json, I don't know what the best value for it would be, nor how best to make the fetch requests in my files. Perhaps an env variable, but how would I populate it with the proper address?
I have messed around with ingresses but was wondering if there is a more straightforward way to do it, as an ingress seems like more overhead than necessary, and I'm not all that familiar with how they work, to be honest.
https://redd.it/zni8hz
@r_devops
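For the pod-to-pod half of this, the standard answer is a ClusterIP Service in front of the Flask pod rather than an Ingress; cluster DNS then gives it a stable name any pod in the namespace can resolve. A sketch with hypothetical names and ports:

```yaml
# Flask pods labelled app=flask-backend become reachable from any pod in
# the same namespace at http://backend-api:5000 via cluster DNS.
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  selector:
    app: flask-backend
  ports:
    - port: 5000
      targetPort: 5000
```

One caveat: the package.json `proxy` field only affects the React dev server. In a production build, fetches run in the user's browser, which cannot resolve cluster DNS, so the usual setup serves the built frontend from nginx configured to proxy a path like `/api` to the Service (or an Ingress routes both).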
Moving from sysadmin to SRE/DevOps: any certs, or none at all?
I've been working as a SysAdmin for 2+ years since I graduated. My current company is pretty small; most things were on bare metal, although we've now virtualised most of them using vSphere. The infrastructure is solid but doesn't use most of what I see job postings asking for nowadays (no K8s, no CI/CD, a little Ansible set up before I got there, no Terraform, and the only cloud services used are AWS S3/Glacier).
I started looking at SRE roles after I learned about the role from Google, since I love how they define it, even if I know not everywhere does it that way. After seeing that I lack most of the most-demanded technologies, I enrolled in KodeKloud and plan to do the IaC path and the K8s one (which I've started).
But I want to know how I can improve my chances in job interviews and make myself more attractive (maybe even avoid starting from the very bottom).
Right now I'm doing the CKAD course on KodeKloud, so I was interested in certs, but looking around and talking to people, they mostly mentioned the CKA and AWS SAA. Should I go for these if I'm looking for a job similar to the SRE role Google describes? Or should I just learn the technologies and forget about certs?
I feel like I have a good foundation in networking, scripting, databases, and even programming, as I've done that as a freelancer (mostly Java for mobile apps, though). All I'm missing on my CV is what I mentioned above, imo.
I'm from Europe if that matters.
https://redd.it/znkfz0
@r_devops
Need help
What exactly is DevOps? I am looking to grow my career; currently I am a union organizer.
I get that coding is basically looking for "bugs" and using some type of logic to solve the problem.
https://redd.it/znpl2z
@r_devops
Web UI for Managing Files in a Kubernetes Volume
I need to give users access to manage files in a volume. For authentication I would prefer to use Duo SSO. Is anyone aware of any projects I could use to make this happen?
https://redd.it/zniz1q
@r_devops
Anyone have luck whitelisting Terraform Registry?
I have a box that runs terraform commands and is locked down pretty securely. Since this box will only do a few tasks, we want to whitelist the specific IPs/CIDRs that are required. Now, however, I am running into an issue during `terraform init`, where I get the below error:
Could not retrieve the list of available versions for provider hashicorp/aws: could not connect to registry.terraform.io: Failed to request discovery document: Get "https://registry.terraform.io/.well-known/terraform.json": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
With that, if I open up 0.0.0.0/0, it works just fine. So, it seems like a routing issue.
I am able to `dig` registry.terraform.io to get an IP, but whitelisting that IP (`146.75.38.49`) does not work. I have tried to search for an IP or CIDR range supplied by HashiCorp, but with no luck.
Any ideas on how to proceed or troubleshoot?
https://redd.it/znsbjz
@r_devops
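One likely explanation: registry.terraform.io appears to be served through a CDN (146.75.38.49 falls in a range associated with Fastly), so the addresses rotate and a single-IP whitelist will fail intermittently. An alternative that sidesteps the registry entirely is Terraform's filesystem provider mirror, sketched here with a hypothetical path:

```hcl
# ~/.terraformrc (or terraform.rc on Windows): make `terraform init` install
# providers from a local directory instead of reaching registry.terraform.io.
# Populate the directory from a machine that does have access, e.g.:
#   terraform providers mirror /opt/terraform-mirror
provider_installation {
  filesystem_mirror {
    path    = "/opt/terraform-mirror"
    include = ["registry.terraform.io/*/*"]
  }
}
```

A network mirror or a proxy that filters by hostname rather than IP would achieve the same goal while keeping providers up to date.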
GitHub Actions - How do you deal with the built-in cron scheduler being unreliable?
Hey everyone, I've built a small CI/CD pipeline that pushes code to AWS. I also want to execute that code through GitHub Actions; up until now, the cron job that executes it ran on the EC2 instance itself. Now I have set up a workflow to run it from GitHub and it works, but it's pretty unreliable: it often executes ten minutes late, and I read that it's not uncommon to be way worse:
https://upptime.js.org/blog/2021/01/22/github-actions-schedule-not-working/
One thing I saw GitHub recommend is simply to "not run mission-critical jobs on it". But I find it very convenient to check right on GitHub whether it has successfully executed my script. What's the easiest fix here?
https://redd.it/znndps
@r_devops
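One common workaround that keeps the run history visible on GitHub: drive the schedule from a reliable external cron (the existing EC2 cron, or EventBridge) and have it trigger the workflow through the repository_dispatch API, keeping `schedule:` only as a best-effort fallback. A sketch; the token, OWNER/REPO, and event name are placeholders:

```yaml
# The external scheduler fires the workflow on time with:
#   curl -X POST \
#     -H "Authorization: Bearer $GH_TOKEN" \
#     https://api.github.com/repos/OWNER/REPO/dispatches \
#     -d '{"event_type":"run-job"}'
on:
  repository_dispatch:
    types: [run-job]
  schedule:
    - cron: "0 * * * *"   # optional best-effort fallback
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./my-script.sh
```

This way the EC2 cron does the timing it was already good at, while every execution still shows up as a workflow run on GitHub.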
Having trouble with home project - beamdog/nwserver on ECS Fargate
Hi all,
I'm trying to get the beamdog/nwserver:latest docker image off the ground with a completely self-contained image that I can deploy to ECR. I want to do this because it's a fun project, it'll help me learn Docker, and also help me learn ECS/ECR.
The problem is that the instructions on their website only show how to run it locally in Docker via the CLI, and I'm at kind of a loss as to how to write the Dockerfile so that ECS Fargate can understand it and successfully deploy the running server binary.
Current issues:
1. When I follow the exact directions on their site, it runs just fine. I think. I've not yet been able to actually log into the server via the game client using my loopback, but it says that it loads the module and runs it just fine so I'll take that with a couple grains of salt. I assume it's some WSLv2 weirdness, as I'm doing this on Windows in WSLv2.
2. It seems like when you run the container, it expects the local filesystem to have the 'server' folder, along with 'modules', 'hak', and 'tlk' inside that folder. Then your module file should go in the 'modules' folder. Does this mean I will need to figure out how to attach an EFS volume to the task which has these same files? I'm seriously struggling to figure out if I can put the .mod file directly inside the container somewhere where the binary can find it.
3. It suggests using either environment variables or --env-file to pass in the variables. Can I also just do this inside the Dockerfile using 'ENV'? It doesn't seem to work when I try it that way. There are a few ways I can skin this pig, namely using environment variables in Terraform for the task definition, or doing it at build time, etc. I think the more secure way would be at build time during the pipeline.
Can anybody help me figure this out? I've tried it about 5 different ways, but it seems like the official image from Beamdog is the best way to go. (I've tried to compile the binary myself on ubuntu, I've tried NWNX - which is way overkill and much more complex, I've tried centos images...)
My current file looks something like this, but again, I don't think this will work unless the ECS task is able to run it the same way I run it on my local CLI...which I just don't know enough about how to set that up in AWS.
# NB: ARG values must be supplied at build time, e.g.
#   docker build --build-arg NWNPLAYERPASSWORD=... .
FROM beamdog/nwserver:latest
ARG NWNPLAYERPASSWORD
ARG NWNDMPASSWORD
ARG NWNADMINPASSWORD
ENV NWNPORT=5121
ENV NWNMODULE="MyCoolModuleName"
ENV NWNSERVERNAME="MyCoolModuleName"
ENV NWNPUBLICSERVER=1
ENV NWNMAXCLIENTS=32
ENV NWNMINLEVEL=1
ENV NWNMAXLEVEL=40
ENV NWNPAUSEANDPLAY=0
ENV NWNPVP=1
ENV NWNSERVERVAULT=0
ENV NWNELC=1
ENV NWNILR=1
ENV NWNGAMETYPE=0
ENV NWNONEPARTY=0
ENV NWNDIFFICULTY=3
ENV NWNAUTOSAVEINTERVAL=0
ENV NWNRELOADWHENEMPTY=0
# NB: copying build args into ENV bakes the secrets into the image's metadata
# (visible via `docker history` / `docker inspect`); passing them at runtime
# via the ECS task definition or Secrets Manager is safer.
ENV NWNPLAYERPASSWORD=$NWNPLAYERPASSWORD
ENV NWNDMPASSWORD=$NWNDMPASSWORD
ENV NWNADMINPASSWORD=$NWNADMINPASSWORD
https://redd.it/znufw6
@r_devops
advice on career and personality
Hey everyone
I got my first traineeship a few months ago but got pulled out of the team within the first month.
In the beginning I was told I didn't have the knowledge, which didn't make sense because it was a traineeship, but today, after a conversation, I was told that the main reason was my attitude and that I was making the team unproductive.
In my life I have been told I am a connector who brings people together, and that my presence and energy in the room are big.
Sometimes without even doing anything.
In this team, though, I was never told anything; on the contrary, I felt good. I also saw the team having the same energy without me.
I have had this all my life as a blessing and a curse, but I am wondering whether technical work and DevOps are the right place for my personality.
Any advice really appreciated 🙏
https://redd.it/zng6zi
@r_devops
Internal documentation thing
I'd like to know how you all manage internal code documentation: API references, processes, instructions and so on. This area seems wild to me; I've seen lots of different approaches and I'm still not sure what works best. Also, who defines the documentation framework for you? Or do you define it for the devs?
So far I've worked with Markdown in Git, Confluence, GitHub wikis, GitLab, Google Docs, Office 365, custom internal portals, and random combinations of the above…
Do you know of an approach that works well for your case?
https://redd.it/znjqns
@r_devops
group merge requests
When working with a polyrepo architecture, a single change to the product often spans multiple projects, which means teams have to coordinate multiple merge requests, each running its own CI/CD process, when in theory it should be a single pipeline and a single merge request. GitLab even recognizes this problem, as seen here:
[https://gitlab.com/groups/gitlab-org/-/epics/882](https://gitlab.com/groups/gitlab-org/-/epics/882)
https://gitlab.com/gitlab-org/gitlab/-/issues/3427
However, they claim they found low demand when talking with customers. My question is: how is that possible? As they themselves note, most people work around this problem by using a monorepo (which has its own issues), CI patch-scripting (which is limited and requires a lot of work and decisions), or submodules (which might fit the bill for libraries, but not for things that need to be deployed, like microservices).
So I guess my question is: what do you do when you want to use a polyrepo architecture for microservices and need to make a single change that spans multiple repositories?
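One partial workaround that exists in GitLab today is multi-project pipelines: a small "umbrella" repo triggers the downstream projects' pipelines and waits for their results, so one pipeline gates the whole cross-repo change. A minimal sketch of such an umbrella `.gitlab-ci.yml`, where the project paths and branch name are placeholders for your own:

```yaml
# Hypothetical umbrella pipeline; group/service-a and group/service-b
# stand in for your real project paths. Each `trigger` job starts the
# downstream project's pipeline on the given branch, and
# `strategy: depend` makes this pipeline wait for (and mirror) the
# downstream result.
stages: [trigger]

service-a:
  stage: trigger
  trigger:
    project: group/service-a
    branch: feature/cross-repo-change
    strategy: depend

service-b:
  stage: trigger
  trigger:
    project: group/service-b
    branch: feature/cross-repo-change
    strategy: depend
```

This still leaves you with multiple merge requests to coordinate, but at least CI success/failure is visible in one place.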
https://redd.it/zni83l
@r_devops