Terraform - glorified documentation?
Hey,
I’ve been scratching my head over this - is Terraform really just glorified documentation in most cases, at least in the Kubernetes world? I use Terraform just to define networks, a few VMs, and a cluster. It doesn’t really fit into a CI/CD pipeline; there is a Kubernetes provider, but as far as I can tell it would be a pain to use for continuous deployment.
What makes Terraform a "must-have"? I can see the benefit when it comes to disaster recovery, as you could deploy your whole infra with just a few commands, but other than that, how does it make your work quicker and more efficient?
Does anybody run Terraform in their CI/CD pipelines? It would make sense if you had an application which is not dockerized and you needed your VMs to look exactly as specified (so, Packer + Terraform). But in the container world?
I define my resources and apply everything manually, and I have a weird feeling that I am missing something obvious.
https://redd.it/p39dlf
@r_devops
Weekly newsletter recommendations
Do you know of a cool DevOps weekly newsletter that brings all the latest news in the DevOps world to your inbox?
https://redd.it/p3h6ge
@r_devops
How do you manage alert creation from customer communication tools like Intercom?
Hey, folks. I run a small but growing startup and am working on improving our team's processes.
We've been looking at OpsGenie, PagerDuty, and Splunk On-Call for acting on alerts/incident management. Today, we use Intercom for managing support conversations with our customers and GitHub issues for project management, feature requests, and bug tracking.
I'd like us to be able to create alerts directly from Intercom, and ideally also view alert status updates in Intercom as they get updated.
None of the three vendors I looked at seems to have a direct Intercom integration today. However, Intercom does have bi-directional integration with GitHub issues. OpsGenie supposedly has a GitHub issues integration as well, but I can't seem to get it to work at all.
What are other people doing here? Curious to hear about any success stories others may have had.
https://redd.it/p38ksw
@r_devops
Devops Job requirements
Hi folks,
Hope you are all doing well. So, a little something about me: I just graduated in July 2020 and am currently working as a DevOps Engineer at one of the service-based companies, and I'm looking for a switch.
I have the AWS Cloud Practitioner certification, am currently doing some hands-on work in GCP, and have also learnt a bit of Terraform along the way. What more skills will I need to apply for a job?
Can you please share some pointers on what I should be focusing on? I am planning to apply actively next year, but until then I plan to prepare myself.
Thanks in advance :)
https://redd.it/p36bo6
@r_devops
Creating your VM on the fly with a tool or have it premade?
So in one case you use a tool like Terraform to start a VM in the cloud and then install whatever you want on top of it (via automation); in the other, you have the image hosted with everything installed already. The first case takes more time and makes the Terraform scripts more complicated. Do you see other pros and cons?
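A minimal Terraform sketch of the two approaches, purely for illustration (all resource names, variables, and file paths here are hypothetical, not from the post):

```hcl
# (a) Generic base image, configure at boot: slower startup, no image
# pipeline to maintain, but the bootstrap script carries more logic
# and can fail or drift at boot time.
resource "aws_instance" "bootstrap" {
  ami           = data.aws_ami.ubuntu.id            # plain distro image
  instance_type = "t3.small"
  user_data     = file("${path.module}/install.sh") # installs everything on first boot
}

# (b) Prebaked image (e.g. built with Packer): fast, identical boots,
# but you now maintain an image build pipeline and AMI versioning.
resource "aws_instance" "prebaked" {
  ami           = var.prebaked_ami_id # output of a Packer build
  instance_type = "t3.small"
}
```

One trade-off worth adding to the list: prebaked images make autoscaled instances come up much faster and bit-for-bit identical, which matters more the more often instances are replaced.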
https://redd.it/p3jyv1
@r_devops
How do you structure the hierarchy of your cloud accounts?
As every cloud provider offers some kind of hierarchy to structure your cloud accounts (AWS: Accounts & OUs, Azure: Subscriptions & Management Groups, GCP: Projects & Folders), I'm wondering: what is your strategy for structuring all of these?
Do you also separate different cloud accounts between environments such as dev & prod, or do you do this differently?
What does your preferred structure look like? Per application? Per department? Or otherwise?
I would love to know how you guys approach this.
Disclaimer: I'm currently building an open-source CLI to make it easier to govern clouds, and I'm thinking of including hierarchy structuring as a part of it.
https://redd.it/p3ka7o
@r_devops
GitHub
GitHub - meshcloud/collie-cli: Build and Deploy modular landing zones with collie on AWS, Azure & GCP
Build and Deploy modular landing zones with collie on AWS, Azure & GCP - meshcloud/collie-cli
Do you use client application names to improve debugging and monitoring of your databases?
Hey folks,
Recently, I wrote an article arguing that your database connection deserves a name.
The basic idea: when a client application connects to a database, it should identify itself with a context-related name, such as the application's name.
Why does it matter? If you operate a non-microservice environment, you often deal with multiple applications connecting to the same database. From a database perspective, this can be pretty risky. For example, one application can take down the database (via inefficient queries, high query volume, ...), affecting the other applications. In such a scenario, it is not obvious which application is the troublemaker, and it is hard to debug.
With application names assigned, it is nearly a no-brainer.
Other use-cases are rate-limiting, re-routing of queries to other nodes/clusters or per-app monitoring from the database perspective.
Which systems support this?
I know of
- MongoDB
- MySQL
- NATS
- PostgreSQL
- redis
- Oracle
- SQL-Server / MSSQL
and non-database systems like
- RabbitMQ
- everything HTTP based (e.g., REST / GraphQL APIs)
Code, or didn't happen
I prepared full working (Docker-based) examples for the mentioned systems with Go(lang), PHP, and Python. Have a look at andygrunwald/your-connection-deserves-a-name @ GitHub.
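As a minimal sketch of the idea for PostgreSQL (the helper function and all names below are made up for illustration; `application_name` itself is a standard libpq/PostgreSQL connection parameter):

```python
# Build a PostgreSQL DSN that carries a context-related application name.
# Connections made with such a DSN show up under that name in
# pg_stat_activity.application_name on the server, so you can tell
# which application a query came from.
def build_dsn(host: str, db: str, user: str, app_name: str) -> str:
    return f"postgresql://{user}@{host}/{db}?application_name={app_name}"

print(build_dsn("db.internal", "orders", "svc", "billing-worker"))
# -> postgresql://svc@db.internal/orders?application_name=billing-worker
```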
What's your call on this?
What do you think about this? Useful? Waste of time?
https://redd.it/p3k3oc
@r_devops
How to take docker-compose to production?
I have a Node.js and PostgreSQL app in docker-compose that I want to take to production and expose to the Internet. What options do I have?
I would prefer to keep it simple and not have to configure or learn different services of cloud providers. IMO it should be transparent. But maybe you guys can tell me better.
Do I need to go for K8s or a service mesh, or aggregate multiple cloud services together?
https://redd.it/p3k2q2
@r_devops
How did you find your VDS/VPS-hoster?
Hi Guys!
Please select the option:
View Poll
https://redd.it/p2yx9b
@r_devops
Using secrets in kube prom stack helm chart
Hey guys. Coincidentally, I was trying to configure Alertmanager using the kube-prometheus-stack Helm chart and saw another post along similar lines.
Does anyone have ideas on how I could reference a secret in the values.yaml? I have a K8s secret created which contains the Slack webhook URL:
...
config:
  global:
    resolve_timeout: 5m
  route:
    ...
  receivers:
    - name: 'slack-test'
      slack_configs:
        - api_url: <<slackApiUrl>>
It works if I have the url pasted directly, but would be good to retrieve the value from the K8s secret that's deployed. Any pointers appreciated!
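One approach worth trying (a sketch only, not tested against this chart version; the secret name and key below are made up) is to mount the existing secret into the Alertmanager pod via `alertmanager.alertmanagerSpec.secrets` and use the file-based variant of the Slack setting:

```yaml
alertmanager:
  alertmanagerSpec:
    # kube-prometheus-stack mounts each listed secret under
    # /etc/alertmanager/secrets/<secret-name>/
    secrets:
      - slack-webhook
  config:
    receivers:
      - name: 'slack-test'
        slack_configs:
          # Read the webhook URL from the mounted secret key instead of
          # inlining it in values.yaml (requires an Alertmanager version
          # that supports api_url_file).
          - api_url_file: /etc/alertmanager/secrets/slack-webhook/url
```

This keeps the webhook out of values.yaml entirely, at the cost of coupling the config to the mount path convention.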
https://redd.it/p2wpmw
@r_devops
Update on CircleCI Config
Yesterday I submitted this post asking for advice on my CI config: https://www.reddit.com/r/devops/comments/p2y21u/advice_on_circleci_config/
I am pretty happy with it now, but would like to hear if you have any other suggestions. Here are the changes I have made due to suggestions from that first post:
I created a `compose.test.yaml` file with:
services:
  flask:
    build:
      context: ./flask
      dockerfile: Dockerfile
    image: myapp/flask
    volumes:
      - ./testresults:/testresults
    environment:
      FLASK_APP: "manage.py"
      FLASK_ENV: "test"
      FLASK_CONFIG: "test"
      TEST_DATABASE_URL: "postgresql://runner:runner@db:5432/circletest"
    command: pytest "app/tests" --cov="app" -p no:warnings --junitxml=/testresults/junit.xml
    depends_on:
      - db
  db:
    image: circleci/postgres:13-postgis
    environment:
      - POSTGRES_USER=runner
      - POSTGRES_PASSWORD=runner
New `config.yml`:
build:
  machine:
    image: ubuntu-2004:202107-02
  steps:
    - checkout
    - run:
        name: Create Results Directory
        command: mkdir testresults && chmod 777 testresults
    - run:
        name: Building Containers
        command: make testbuild
    - run:
        name: Running Tests
        command: make test
    - store_test_results:
        path: testresults
    - store_artifacts:
        path: testresults
I created a `Makefile` with:
testbuild:
	@echo "Running Test - Build"
	docker-compose -p mytest -f compose.test.yaml build

test:
	@echo "Running pytest"
	docker-compose -p mytest -f compose.test.yaml up --exit-code-from flask
https://redd.it/p3mk1s
@r_devops
DevOps Bulletin - Digest 15 is here 🔥
Hey folks!
This week's digest covers the following topics:
* NSA Kubernetes Hardening guidance
* Building a CDN from scratch in 5 hours
* Docker container security cheat sheet
The full digest is available here: [https://issues.devopsbulletin.com/issues/kubernetes-hardening-guidance-by-nsa.html](https://issues.devopsbulletin.com/issues/kubernetes-hardening-guidance-by-nsa.html)
https://redd.it/p3o7hi
@r_devops
Permission Denied on EFS mounted to SFTP server?
# TL;DR - questions
Customer files are failing to upload - it varies by day which files fail; some come through fine, others fail with a partial upload. This is specific to one customer, so I suspect that it's their custom SFTP software or their network QoS causing the issues - but I don't have a good way to prove where the issue is based on the error messages.
Context: no other customers are having this issue - it is unique to a specific vendor, and happens for any of their users and nobody else - so I'm 9000% sure it's on their end.
Logs are posted below, and I've got some questions:
1. Is the "Permission denied" error here just a bad error message for something else that's happening?
2. What's a good way to pinpoint where/why the process_write: write failed error message is happening? It appears to precede their disconnection from the server.
3. What the hell else can I look at to identify why the process_write failure might be happening, or to identify where in the stack the error is coming from?
# More details:
Customer is trying to upload data to our SFTP server. They keep getting partway through a large file and then our SFTP server shows a permission denied log message but still has written the partial upload.
We don't do IP whitelisting to this server - it's an EC2 instance with a public IP and I've verified ports are open to all on the security group, so there's nothing between the server and its connection - it's just a direct customer connection to the server.
Jul 29 12:12:24 use-prod-transfer1 sshd[32031]: Accepted password for customer-user from 192.168.29.123 port 9172 ssh2
Jul 29 12:12:24 use-prod-transfer1 systemd-logind[1094]: New session 18865 of user customer-user.
Jul 29 12:12:24 use-prod-transfer1 internal-sftp[32088]: session opened for local user customer-user from [192.168.29.123]
Jul 29 12:12:24 use-prod-transfer1 internal-sftp[32088]: received client version 3
Jul 29 12:12:24 use-prod-transfer1 internal-sftp[32088]: realpath "."
Jul 29 12:12:24 use-prod-transfer1 internal-sftp[32088]: open "/writeable/customer-file" flags WRITE,CREATE,TRUNCATE mode 0666
--
Jul 29 13:00:40 use-prod-transfer1 internal-sftp[32088]: error: process_write: write failed
Jul 29 13:00:40 use-prod-transfer1 internal-sftp[32088]: sent status Permission denied
Jul 29 13:00:40 use-prod-transfer1 internal-sftp[32088]: close "/writeable/customer-file" bytes read 0 written 1305450000
Jul 29 13:00:40 use-prod-transfer1 internal-sftp[32088]: session closed for local user customer-user from [192.168.29.123]
Jul 29 13:00:40 use-prod-transfer1 systemd-logind[1094]: Removed session 18865.
I verified that their user owns the directory that we're having them drop into via chroot:
root@ip-10-1-2-3:/sftp-home/customer-user# ls -lah
total 56K
drwxr-xr-x 5 root etl 6.0K May 1 2020 .
drwxr-xr-x 874 root root 38K Aug 11 13:17 ..
drw-r----- 18 root root 6.0K Aug 3 13:00 archive
drwxr-sr-x 2 root etl 6.0K Jun 24 10:23 dev
-rw-r--r-- 1 root etl 767 Feb 4 2020 README.txt
drwxrwxr-x 3 customer-user sftp-only 6.0K Aug 3 14:00 writeable
root@ip-10-1-2-3:/sftp-home/customer-user#
And our chroot config in sshd:
Match LocalPort 22
ForceCommand internal-sftp -l VERBOSE -f LOCAL6
AllowGroups sftp-only
PasswordAuthentication yes
RSAAuthentication no
X11Forwarding no
AllowTcpForwarding no
ChrootDirectory /sftp-home/%u
The directory /sftp-home/%u is a folder on an AWS EFS filesystem with bursting allowed, and no restrictions (which we probably should have, but at least that's not part of this problem).
TCP dumps from our end yielded no insight - it was literally just packets being sent, and then no more packets being sent at the time of the error message and client disconnect, with
nothing but successful transfers for almost an hour of tcp dumps. No retransmits, so the network connection is clean - no failed keepalives - everything looks like a perfectly happy connection until it goes bye bye.
As far as I can tell, there is nothing that we have configured that's unique for this user - and I'm at a loss for how to prove it's not us when the log messages don't give any indication of what the hell is actually failing.
Help?
https://redd.it/p3pk29
@r_devops
Two Bitbucket cloud migration questions
I already asked about this in the Bitbucket community forums, but on the chance that somebody here has some concrete information, I'll ask.
1. Currently, the Bitbucket migration path from local server to cloud allows for migration of repository data, but not repository metadata. That means users can migrate code, tags, and branches, but not pull requests and comments. According to the Bitbucket webpages, that may happen, if enough users express interest -- but the webpages said that in November 2020, and no change since then. Does anybody have any updated information about that? The PRs and comments are important to us.
2. The Bitbucket webpages have also been promising a Bitbucket Cloud Migration Assistant since November 2020 as well. If you sign up for the early access program, they promise to send you updates on when the Migration Assistant will be available. So far, the only availability date I can find is Real Soon Now^(TM). Does anybody have newer news about the Migration Assistant?
https://redd.it/p3pbf8
@r_devops
Intelligent synchronization between servers for debian
I am looking for a program for Debian that would track the use of files in a selected location on server A and, on that basis, select the data that is most frequently used and should be synchronized with server B. Something like intelligent synchronization. Do you know of such a program?
https://redd.it/p3rh5m
@r_devops
Can you suggest the CI/CD tools to learn if you want to be a DevOps engineer?
Where should I start?
https://redd.it/p2uyv4
@r_devops
Is your devops just ops automation?
Been in software for a long time.
I remember when DevOps came out... we talked a lot about it being a culture, not a team.
Seems like in its current form, DevOps is a team?
Has DevOps really just become automation for the ops team?
Do your teams of "devs" not know how the prod systems work... do they just bang out code with no notion of how it does what it does once they commit or issue a PR?
https://redd.it/p3ulan
@r_devops
Should you use AWS Route 53 for both your domains and subdomains or use it only for one of them?
We have a domain on GoDaddy and planning on routing the traffic to it through Route 53 and later will be creating subdomains using Route 53 too. So I wanted to know what are the pros and cons and also the security concerns for both scenarios of using GoDaddy for only the domain and Route 53 for subdomains and vice versa.
https://redd.it/p3ud4n
@r_devops
Monitor GitHub Pull Requests with Prometheus
I developed a new exporter so that we can get more insight into Hacktoberfest contributions within my company this year. We're also thinking of giving prizes to the top three contributors.
I hope others will do the same and find this useful!
https://dev.to/circa10a/monitoring-github-pull-requests-with-prometheus-57p2
https://redd.it/p217ut
@r_devops