How to perform CI/CD in mobile development (Android/Apple)
Hello,
I am quite familiar with DevOps pipelines and CI/CD for backend systems, but I am getting quite confused about how that works for mobile development...
here is my setting:
The backend of the mobile app has three environments (Terraform- and Ansible-powered):
- Development (on one machine), where the developer deploys the backend locally (containerized), makes changes, and runs unit tests
- Staging (in AWS), where the mobile developers connect to the APIs and perform mobile testing
- Production (in AWS), which the live mobile app is connected to
At the moment the mobile app (frontend) is developed in a silo, and there is nothing really in place in terms of a pipeline, CI/CD, etc...
Initially I thought to use the built-in features of the stores (Apple and Google):
- Apple TestFlight
- Google Alpha/Beta channels
But the challenge is that once I publish the app (to either Apple or Google) in beta-test mode (TestFlight and/or the Alpha/Beta channels), I cannot point the app at a staging/test environment; it will be pointed at production...
Is there something I am missing? How do you get beta testers on the mobile frontend (in a test environment and not production)?
Maybe what is not clear to me is the overall CI/CD pipeline for the mobile frontend (the part that will ultimately be uploaded to the store)...
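For what it's worth, one common pattern is to decide the backend URL at build time rather than at install time: the build you upload to TestFlight or a Play test channel is simply compiled against the staging API, while the store release is compiled against production. A minimal sketch of the CI-side selection step; the function name, URLs, and asset path below are all placeholders, not anything from the post:

```shell
# Sketch: pick the backend base URL per environment at build time and bake
# it into the app bundle. All names and URLs here are placeholders.
api_url_for() {
  case "$1" in
    staging)    echo "https://staging.api.example.com" ;;
    production) echo "https://api.example.com" ;;
    *)          echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

# In CI, before building the beta, something like:
#   printf '{"apiBaseUrl":"%s"}\n' "$(api_url_for staging)" \
#     > app/src/main/assets/config.json
```

The beta artifact then talks to staging even though it is distributed through the store's test channel; only the build compiled with the production URL gets promoted to the public listing.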
Thank you all!
https://redd.it/nqjbse
@r_devops
Whole picture vs split by environments
In most cases I prefer complete separation of environments. DBs, APIs, and Kafka streams should all be the same across environments, which affords the benefits of proper CI/CD.
When you start to look at the whole picture, sometimes it's helpful to have consolidated views. One good example is consolidated dashboards with GitLab: https://dashboards.gitlab.com/
This can be true for other cases like logs, tracing, SAML auth, and third-party integrations.
What is a good mental model and splitting point for deciding when something needs replicas per environment versus a consolidated view?
https://redd.it/nqj41f
@r_devops
Practical kubernetes projects
Does anyone know of (or have) a practical Kubernetes project?
I would like to learn by doing.
Is there a guide/book/course that can help me?
I'm good with the Kubernetes basics.
https://redd.it/nqiybo
@r_devops
Seeking advice on which cloud services to use with my project (SPA w/ AWS potentially)
Hi all,
Forgive me if this post is inappropriate for this sub; I am looking for some guidance. I am currently developing a single-page application that will be based on an AWS multi-tenant model (i.e. a single primary database for the PII/users table, with a separate RDS instance generated for each client's data set: no PII, but important).
The application requires users to answer a series of questions, and the answers will be reported on in an admin panel within the app. All clients will access the same EC2 instance through a subdomain (www.mycompany.[client-name].com/app) and ideally will be paired with their allocated RDS instance on arrival. The EC2 instance contains my app (Nuxt with a Laravel API) and, for now, my primary DB with usernames, tenant IDs, etc.
Additionally, each client will have their own homepage (I would imagine this repository would sit on S3: www.mycompany.[client-name].com), which could be updated through a 'config' page within the application admin panel, triggering some sort of continuous-deployment process and an update to the homepage.
Soooo... I was hoping you could kindly offer some advice:
- Is what I am describing achievable within AWS? Are there better/more achievable ways of doing this? I'm open to suggestions outside of AWS.
- Could you recommend a way to dynamically select the correct tenant DB when a user arrives on the page from a specific URL?
- Security and meeting privacy standards are crucial. Is there anything in particular I should be doing or keeping in mind?
- Expenses may become an issue. Might this setup be expensive? Can you recommend a good way of calculating costs, ballpark?
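On the tenant-selection question, one common approach is to treat a DNS label of the request's Host header as the tenant key and resolve it against the primary database (or a cache of it). A hypothetical sketch of just the parsing step; the helper name and domain are illustrative:

```shell
# Sketch: derive a tenant key from the Host header of the request.
# Here the tenant is the leftmost label (acme.mycompany.com -> acme);
# if you keep a www prefix, take the second label instead.
tenant_from_host() {
  printf '%s\n' "$1" | cut -d. -f1
}

# The app would then look this key up in the primary DB, e.g.:
#   SELECT rds_endpoint FROM tenants WHERE slug = :tenant;
```

In a Laravel app this logic would typically live in middleware that swaps the database connection per request; the SQL above is only a placeholder for that lookup.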
I really hope this is clear; apologies if it's too vague, as a lot of this is new to me. Please don't eat me, Reddit!
Any guidance (even just resources/other subs/etc) would be very much appreciated.
Thanks
https://redd.it/nqgi0i
@r_devops
Send VM's system information to a Webhook
Hello, I have created an Ubuntu 18.04 (Bionic) virtual machine using KVM and virt-install. How can I send its system information to a webhook when it boots for the first time? And how do I create a webhook in the first place? I'd appreciate it if anyone could explain this to me.
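A webhook is just an HTTP endpoint that accepts a POST request (a small Flask/Express app, or a hosted receiver, can provide one). On the VM side, a first-boot script can assemble a JSON payload and curl it. A rough sketch with a placeholder URL; you would wire it into a systemd one-shot unit or cloud-init's runcmd so it fires once:

```shell
# Sketch: collect basic system facts as JSON, to be POSTed to a webhook
# on first boot. The endpoint URL below is a placeholder.
vm_payload() {
  printf '{"hostname":"%s","kernel":"%s"}\n' "$(hostname)" "$(uname -r)"
}

# First-boot hook (the guard file keeps it from re-running on later boots):
#   [ -e /var/lib/firstboot.done ] && exit 0
#   vm_payload | curl -fsS -X POST -H 'Content-Type: application/json' \
#     -d @- https://example.com/hooks/vm-registered
#   touch /var/lib/firstboot.done
```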
https://redd.it/nqfnic
@r_devops
How are you handling package/Image security?
Hey r/devops... haven't posted one of these in a while. I need to lock down, monitor, and clean up my teams' usage of:
* language packages (Python, Go, JavaScript)
* Dockerfile dependencies: yum/apt packages and other miscellaneous non-language dependencies
* A new one: GitHub Actions dependencies, i.e. the Actions that my teams are using in their CI/CD pipelines. The rub here is that we all LOVE GH Actions, but we're also using a bunch of random actions, which is horrible from a security standpoint.
We're using GitHub and GitHub Actions, ECR for images and no surprise, a bunch of open source libraries.
I need a sane way to alert on unapproved libraries/packages/actions, or at least on *new* usages of the above, and ideally also to enforce the usage of known code/tools. There are lots of different tools to use here, and we're already using Dependabot, native ECR scanning, and various linters (e.g. golangci-lint).
Just looking for ideas and recommendations. I'm considering something like Artifactory and/or AWS CodeArtifact to pin, control, and cache external dependencies. I'm also contemplating vendoring our Go code. I'm not even thinking about license scans at this point, but that's something we'll probably need too (e.g. Black Duck).
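For the GH Actions angle specifically, even before buying tooling you can inventory what is in use and diff it against an approved list. A rough sketch; the workflow path is the GitHub default, and the allowlist file name is hypothetical:

```shell
# Sketch: list every action referenced by workflow files in a repo,
# so the result can be compared against an approved list.
list_actions() {
  grep -rhoE 'uses:[[:space:]]*[^[:space:]]+' "$1/.github/workflows" \
    | awk '{print $2}' | sort -u
}

# e.g. fail a CI step when something unapproved appears:
#   list_actions . | grep -vxFf approved-actions.txt && exit 1
```

Run across all repos on a schedule, this at least gives you the "alert on *new* usages" part without new infrastructure.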
tl;dr how do you secure and manage your external dependencies???
https://redd.it/nqakxa
@r_devops
Best logging solution for startups
What's a good paid log-management solution for a small startup? The logs are mostly from our API and worker clusters and will be used for troubleshooting errors. We don't have the resources to build our own stack, and CloudWatch's UI just doesn't seem to cut it.
https://redd.it/nq9pqk
@r_devops
Supported options for Docker Swarm persistent storage?
Preface: Yes I know about Kubernetes and no, at this point in time, it's not a feasible solution for this use case.
Does anyone know of an actively maintained persistent storage driver for Docker Swarm? My google-fu has revealed a ton of (understandably) dead projects that were consumed by Kubernetes. I suspect this is a dead-end search in 2021, but I figured I would reach out in case anyone is still running Swarm and can share how they're handling persistent storage.
For reference, I'm working within a VMware vSphere environment (and VMware, unfortunately, seems to no longer maintain its Docker-specific driver).
https://redd.it/nqaz7e
@r_devops
What are you making with Go? CLIs? REST APIs?
Looking for inspiration. I'm maybe thinking of some simple file serving via JSON, but I'm not that handy with structs yet. I love the idea of not needing to ship a requirements.txt and an interpreter everywhere.
Any good simple projects? The plan is to write more Go this year.
https://redd.it/nrpp3t
@r_devops
Are there good options for researching Splunk Use?
I've worked at a company that hasn't really made DevOps a priority since its inception, mostly having AWS do all the heavy lifting. Now we're trying to go through the basic steps of getting some observability into the app, using Prometheus for metrics and Splunk for logging.
Of course, Splunk is a huge product that does a ton more than logging, so if we're going to set up a subscription, I want to make sure we're using the tools of theirs that fill all our gaps, where it makes sense. My problem is that I'm having a very hard time working out what each individual part of the product does. Does anyone know of good resources for this? Their website isn't super detailed, and I feel I need those details when considering things like RUM and APM. I'll do all the reading or listening I need to do; I just need to find good resources.
https://redd.it/nrsaqz
@r_devops
Simple kubernetes for staging/test server?
I'm new to DevOps. Rather than going full-blown K8s, I was wondering whether there is something that can help quickly set up and deploy containerized apps onto a server. Docker Compose, maybe? For production it makes sense to me to go with K8s.
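Docker Compose is a reasonable fit for a single staging box: one YAML file describing the services, then `docker compose up -d` to converge the host onto it. A minimal sketch; the service name and image are placeholders:

```shell
# Sketch: emit a throwaway Compose file for a single staging server.
compose_stub() {
cat <<'EOF'
services:
  app:
    image: myorg/myapp:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
EOF
}

# compose_stub > docker-compose.yml && docker compose up -d
```

Other middle grounds before full Kubernetes include k3s, microk8s, or kind, which keep the Kubernetes API but shrink the operational footprint, so staging manifests carry over to production later.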
https://redd.it/nrjv78
@r_devops
Move from the US to Europe but work remotely for a US company?
Hello all,
Looking for some advice. I'm a Linux engineer who operates as a standard DevOps/SRE in most orgs, with over a decade of experience. I was born and raised a US citizen. For family reasons, we're considering moving to my spouse's hometown in a European country and spending time close to the family there.
Prior to COVID I would not even have considered this a reasonable option, but now that everyone is working remotely more (and demanding it), I suspect it's a much more realistic option. I know most US companies won't hire a remote employee abroad, except sometimes as a 1099 contractor, due to the tax and legal hiring implications that come with other countries, but I know larger corporations are more open to this.
My question is: does anyone know where one would start? All of my contacts/search sites/recruiters are in the US and operate solely there. I know it's been done, and it's rare, but... I'm not really sure where to begin. :)
Thanks for the advice in advance.
https://redd.it/nrmhrl
@r_devops
Writing QCOW2 image to disk?
Does anyone have a way to write a disk image to an actual, physical disk? What I'm trying to do is have a Packer image that gets applied to bare-metal machines. Right now, I expose the disk image as a block device with qemu-nbd, then dd that block device (/dev/nbd0) to the disk (/dev/sda). However, I'm writing all the zeros in the image as well, which is extremely inefficient and defeats the entire purpose. Does anyone have tools they use for this, or am I off into uncharted territory?
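One possible alternative to dd'ing the NBD device: `qemu-img convert` understands the qcow2 allocation map, and with `-n --target-is-zero` (a flag in recent qemu-img releases) it can skip unallocated/zero clusters entirely, provided the target disk genuinely is all zeros (e.g. freshly blkdiscard'ed). A sketch that just assembles the command; the image and device paths are placeholders:

```shell
# Sketch: build the qemu-img invocation that writes a qcow2 straight to a
# block device while skipping zero clusters. Run blkdiscard on the target
# first so the --target-is-zero promise actually holds.
qcow2_to_disk_cmd() {
  printf 'qemu-img convert -n --target-is-zero -p -O raw %s %s' "$1" "$2"
}

# e.g.  blkdiscard /dev/sda
#       $(qcow2_to_disk_cmd image.qcow2 /dev/sda)
```

This keeps the single-image Packer workflow but avoids streaming the empty space through dd.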
https://redd.it/nrkmqy
@r_devops
Are CDNs becoming less relevant with the service worker/Cache API?
Or is the initial load, and updating of the JS bundle, still critical to deliver fast? Sorry if these sorts of posts aren't allowed.
https://redd.it/nr9xjf
@r_devops
Upcoming First AWS Interview
Hi all,
I am currently a network engineer planning to transition to the cloud. I already passed the SAA-C02 exam, and now I have an upcoming interview as an AWS SysOps engineer. Can you tell me the likely questions and share some tips? Thanks.
https://redd.it/nrwecg
@r_devops
What is the best ingress controller for RabbitMQ?
Hi, I am running different versions of our application in different namespaces on one Kubernetes cluster. Each namespace also contains one RabbitMQ instance, and I am looking for a good ingress controller/reverse proxy that can forward traffic to each RabbitMQ. That way I don't need to create multiple public services for these RMQs; I can just create one public service for the ingress controller.
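One caveat worth noting: AMQP is plain TCP, not HTTP, so a standard Ingress resource won't route it. ingress-nginx can still do this via its TCP-services ConfigMap, mapping one external port to each namespace's RabbitMQ service. A hypothetical sketch that emits such a mapping; the namespace, service name, and port are placeholders:

```shell
# Sketch: generate an ingress-nginx tcp-services entry that forwards an
# external port to one namespace's RabbitMQ. Names/ports are placeholders.
rmq_tcp_entry() {  # $1 = external port, $2 = namespace
cat <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "$1": "$2/rabbitmq:5672"
EOF
}

# rmq_tcp_entry 5672 team-a | kubectl apply -f -
```

Each application version's RabbitMQ gets its own external port, while only the ingress controller needs a public Service.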
https://redd.it/nr8y6l
@r_devops
How to implement one-click rollback in GitLab ci/cd?
Hi all, I'm trying to use the rollback function and Environments in GitLab, and I'm currently trying to figure out how to use them properly.
I want to achieve a one-click rollback in the web UI.
My current mock-up deployment pipeline is set up as below
(the real CI file is quite long):
stages:
  - build
  - pre-deploy
  - deploy

docker_build:
  stage: build
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_1:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-1:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-1 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_2:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-2:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-2 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

deploy:
  stage: deploy
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker stack deploy --compose-file docker-compose.yaml stack-name
  environment:
    name: dev
  tags:
    - docker_swarm_manager
The deploy stage and the build/pre-deploy stages need to be executed on separate runners, hence the tags.
In my real CI file, the rsync task runs against 10 servers, with a lot of additional commands not listed here.
I separated the rsync work into separate jobs to get enough granularity in the UI to see exactly which node a deployment failed on.
In a rollback scenario with the current setup, I need to:
* Go to the Operations -> Environments section in GitLab
* Enter the "dev" environment
* Click the rollback button for **each** of the defined jobs as per the above ci file
I'm trying to achieve a one-click rollback solution, and I'm having a hard time understanding how I **should** structure the config to achieve this. Am I trying to implement something that is not possible?
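One possible restructuring, sketched under the assumption that per-node granularity can move into the job log rather than separate jobs: let a single job carry `environment: dev` and fan out to every host itself, so the environment has exactly one deployment job per pipeline, and therefore one rollback button. Host names below are the placeholders from the CI file above:

```shell
# Sketch: the script of a single GitLab deploy job that pushes the
# release to every host in turn. Failures still name the host in the
# job log, which partly replaces the per-job granularity.
deploy_host() {
  rsync -a src "gitlab@$1:/opt/deployments/$CI_COMMIT_SHORT_SHA"
  ssh "gitlab@$1" "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
}

deploy_all() {
  for h in "$@"; do
    echo "deploying to $h"
    deploy_host "$h"
  done
}

# in .gitlab-ci.yml the pre-deploy job's script would run, e.g.:
#   deploy_all fake-host-1 fake-host-2
```

The trade-off is exactly the granularity mentioned above: one job per environment makes the rollback button do everything, at the cost of the per-server job tiles in the pipeline view.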
Any advice or pointers would be appreciated!
https://redd.it/nr8o91
@r_devops
Which CloudFormation stacks are managed by a CodePipeline - script
At work, we needed to know which CloudFormation stacks were deployed by a given CodePipeline. There would be no such question if we had properly tagged each stack (which we should have). If you are like us and you didn't, here is a script which shows the stacks managed by a given pipeline:
https://github.com/ngs-lang/nsd/blob/master/aws/codepipeline/pipeline-stacks.ngs
Hope this helps. Have fun!
https://redd.it/nr8mkd
@r_devops
Tools to provision and manage Public and Private Cloud.
We are a private cloud solutions company, and we are moving from private cloud only to hybrid + edge cloud. We use OpenStack as our private cloud solution. The problem is that I am not able to find the perfect tool that manages and provisions OpenStack and public clouds like AWS, GCP, and Azure. Terraform comes the closest, but the problem with Terraform is that it's mostly a CLI tool, and I am looking for something API-based or service-based. We don't mind using multiple tools if that gets the work done.
https://redd.it/nr7vhm
@r_devops
What's the most convenient order in which to install Consul, Nomad, and Vault
I'm trying to set up a simple 3+-machine Vault, Consul and Nomad DC:
- Machine 1: Vault-server, Consul-server, Nomad-server
- Machine 2: Consul-client, Nomad-client
- Machine 3: Consul-client, Nomad-client
What is the most convenient order to set up these services?
Consul first, then Nomad, then Vault;
or Vault, Consul, Nomad; or Consul, Vault, Nomad?
I could have Vault running in a container, managed by Nomad, or I could use Vault to provide the certificates needed to set up mTLS with Consul.
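A common choice is Consul first (so Vault can use it for storage/discovery and Nomad can register services), then Vault, then Nomad (so Nomad's Vault integration has something to talk to). A minimal bootstrap sketch, assuming the stock local ports and the real health endpoints each service exposes; the ordering itself is a convention, not a requirement:

```python
import time
import urllib.request

# One common bring-up order: Consul -> Vault -> Nomad.
# Ports are the defaults; endpoints are each service's HTTP health check.
BOOT_ORDER = [
    ("consul", "http://127.0.0.1:8500/v1/status/leader"),
    ("vault",  "http://127.0.0.1:8200/v1/sys/health?standbyok=true"),
    ("nomad",  "http://127.0.0.1:4646/v1/status/leader"),
]


def wait_healthy(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # not up yet (connection refused, timeout, ...)
        time.sleep(interval)
    return False


def bootstrap() -> None:
    """Block on each service becoming healthy before starting the next step."""
    for name, url in BOOT_ORDER:
        print(f"waiting for {name} ...")
        if not wait_healthy(url):
            raise RuntimeError(f"{name} did not become healthy in time")
```

Running Vault as a Nomad job is a chicken-and-egg trade-off: it simplifies operations later but means Nomad must come up without Vault-issued certificates first.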
If you have any tips or tricks, feel free to share.
https://redd.it/nr7oka
@r_devops
Need advice on better designing a basic LAMP workflow across multiple machines.
Hey there!
I'm struggling to make this understandable.
I'm currently a one-person show, developing with LAMP. I'm having trouble designing an efficient workflow now that I develop on a desktop alongside my MacBook (Air, 2014; pretty old, but works well). I'd love other people's input on their workflows for similar goals. I feel like I'm making this harder than it should be, but I'm at the level where I'm not sure what to google next.
I currently have an Apache web server installed locally on my MacBook via Homebrew, which I also use to install PHP etc. I write my files in a separate project folder, then push them to my local Apache web server for testing.
Now say I want to work on this project on my desktop. Right now I just have Dropbox watching my source files, so they're readily available on both the desktop and the MacBook.
On my desktop, I use Vagrant to spin up a vanilla Ubuntu (16.04+) VM and install a LAMP stack on it, then push my source files onto that server, as in my MacBook workflow.
Problems:
- When I develop on my desktop Vagrant VM, the Apache config works a little differently than on the Mac. I don't feel confident knowing the web server is installed differently on each platform, with differing dependencies and other things I probably don't even know about.
- I can't just run Vagrant on my Mac because of resource usage, battery life, etc.
- Between the latency of connecting to a central remote development server and the fact that I sometimes can't afford to pay for a VPS, DigitalOcean et al. are ruled out as a development environment.
- Having to push all my code to a local web server every iteration for testing seems annoying. Is this just part of it? Should I set up some bash scripts to automate this file upload? AHHH
I'm not at the level where I absolutely need consistency between both platforms, but it's bothering me and I'm wondering how others approach it.
I would like a workflow that offers a consistent development environment across all platforms. It's easier if it's just front-end.
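The usual answer to both the consistency problem and the push-every-iteration problem is to run the same containers on both machines and bind-mount the project folder. A minimal Docker Compose sketch (image tags are real official images; the paths and the dev-only password are just examples):

```yaml
# docker compose up  ->  identical Apache/PHP + MariaDB on macOS and Ubuntu
services:
  web:
    image: php:8.2-apache          # Apache + PHP in one official image
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html        # edit locally, served immediately, no push step
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example   # dev-only credential
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```

This replaces both the Homebrew Apache and the Vagrant VM with one definition checked into the repo, so Dropbox (or better, git) only needs to carry source and the compose file.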
Thanks loads if you got through that!
https://redd.it/nr73jw
@r_devops