Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Dumb question: can I boost my phone's ability to detect Wi-Fi?

Hey guys, it might be a dumb question, but is it possible to boost my phone's ability to detect a Wi-Fi network by using an app?
I'm trying to convince my friend that it is not possible.

https://redd.it/zshodg
@r_devops
Realistic data for load tests

Are there any load testing platforms/libraries that can automatically generate unique data (e.g., query params, basic JSON body data) for each API request in a larger load test?

I do have some existing logged request data; are there any platforms that could sample from an existing dataset to populate a load test?
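Both halves of this, unique values per request and sampling from logged data, are easy to prototype in plain Python before committing to a platform. A sketch with made-up names (the logged_requests list and the order_ref field are illustrative, not from the post):

```python
# Sketch: feed a load test unique per-request data by sampling logged
# requests and overriding the fields that must be unique per request.
import json
import random
import uuid

# Stand-in for previously logged request data.
logged_requests = [
    {"path": "/orders", "body": {"user_id": 7, "order_ref": "A-100"}},
    {"path": "/orders", "body": {"user_id": 9, "order_ref": "A-101"}},
]

def next_request():
    """Pick a real logged request, then make its identifying field unique."""
    req = json.loads(json.dumps(random.choice(logged_requests)))  # deep copy
    req["body"]["order_ref"] = str(uuid.uuid4())  # guarantee uniqueness
    return req

a, b = next_request(), next_request()
```

Most load tools (k6, Locust, Gatling) accept a hook like this for per-request payload generation, so the same logic ports over once a platform is chosen.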

https://redd.it/zrr3et
@r_devops
Can a sysadmin install apps through my connection to the company Wi-Fi?

I noticed a new folder named 'linux' that contains two folders, 'docker-desktop' and 'docker-desktop-data'.

I think the sysadmin could use it to block specific services for me, track my activity, or record my screen remotely (which sounds like a violation of my privacy). I'm curious whether any app can actually do that.

If such an app really exists, can somebody please point me to how to uninstall it?

Thanks for reading.

https://redd.it/zshua4
@r_devops
How do I run MinIO with docker-compose + an nginx reverse proxy?

I have a problem with MinIO: it does not come up on the selected domain (502 error).
- my docker-compose.yml for the nginx reverse proxy + Let's Encrypt
services:
  nginx:
    container_name: nginx
    image: nginxproxy/nginx-proxy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/docker/nginx/html:/usr/share/nginx/html
      - /var/docker/nginx/certs:/etc/nginx/certs
      - /var/docker/nginx/vhost:/etc/nginx/vhost.d
    logging:
      options:
        max-size: "10m"
        max-file: "3"

  letsencrypt-companion:
    container_name: nginx-le
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/docker/nginx/acme:/etc/acme.sh
    environment:
      DEFAULT_EMAIL: [email protected]

- my docker-compose.yml for MinIO
version: '2'

services:
  minio:
    container_name: minio.domain.com
    image: quay.io/minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=supersecret
      - MINIO_BROWSER_REDIRECT_URL=https://minio.domain.com
      - MINIO_DOMAIN=minio.domain.com
      - VIRTUAL_HOST=minio.domain.com
      - LETSENCRYPT_HOST=minio.domain.com
    volumes:
      - minio:/data
    restart: unless-stopped
    expose:
      - "9000"
      - "9001"
    networks:
      - proxy

networks:
  proxy:
    external:
      name: nginx_default

volumes:
  minio:

- logs from `docker logs` for the MinIO container
Warning: Default parity set to 0. This can lead to data loss.
WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2022-12-12T19-27-27Z (go1.19.4 linux/amd64)

Status: 1 Online, 0 Offline.
API: https://192.168.0.7:9000 https://127.0.0.1:9000
Console: https://192.168.0.7:9001 https://127.0.0.1:9001

Documentation: https://min.io/docs/minio/linux/index.html


When I put ports in the MinIO docker-compose:

   ports:
     - '9000:9000'
     - '9001:9001'


MinIO works, but for every domain on my server.
How can I fix it so MinIO is served only on minio.domain.com?
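(For reference: publishing ports 9000/9001 directly on the host bypasses the proxy entirely, which is why every domain then reaches MinIO; keeping them under `expose` and routing through nginx-proxy avoids that. nginx-proxy also picks a single upstream port per container, and when a container exposes more than one, as here with 9000 and 9001, its documented VIRTUAL_PORT variable selects which port the domain routes to. A sketch of the environment entries under that assumption; whether the domain should front the S3 API on 9000 or the console on 9001 depends on the intended use.)

```
    environment:
      - VIRTUAL_HOST=minio.domain.com
      - VIRTUAL_PORT=9000          # route minio.domain.com to the S3 API port
      - LETSENCRYPT_HOST=minio.domain.com
```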

https://redd.it/zrm101
@r_devops
Do you enjoy being in DevOps?

I would especially be interested in hearing from people who came from a general Systems / Network Administration background.

Rather than make the same post about how to switch from one field to DevOps, I am interested in how you all feel about being in the field. I understand YMMV depending on the role and company.

When did you feel like your programming knowledge was sufficient to make the leap?

https://redd.it/zr8sgs
@r_devops
How to automate: restart service on a remote machine

Hello, hopefully a devops question.

We have one old service that fails from time to time; it is not critical enough to fix, and it does not fail often, so a `systemctl restart` is sufficient. It is a Linux VM.

Current implementation: logs go to Elasticsearch, and elastalert monitors for the failure event; once such an event is detected, elastalert executes an API call to Ansible AWX, which then runs a playbook to restart the failing service.

I am considering dropping AWX for this and am looking for another approach.

Here are my options:

1. execute the command via SSH. Secure and simple enough, but I need to keep an SSH key on the elastalert container with root or limited systemd access.
2. create a small ansible playbook on elastalert container to run it against failing server.
3. use gRPC, this tutorial makes it look fairly simple. But is it the right tool to use for this case?
4. run flask app on target server to listen for API events.

In all cases above, I need to add stuff into the docker container image, or load files from the host via volume mounts. Also, options 3 and 4 are safer in a way, as I can program them to run a limited set of commands, in my case restarting just one service. The first two options are less secure, as `systemctl restart` needs root access, but I might be able to limit that in the `sudoers` config.

Elastalert supports many actions when an event is detected, two of them being: run a command or make an HTTP POST. Ideally, since elastalert supports HTTP/HTTP2 POST, that option would be the easiest way to make the API call.

Is there another, possibly standard way which I might not know? I might want to expand this to more than just one service, and use it to make a sort of self-healing self hosted infrastructure.
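Option 4 can stay very small if the listener only knows an allow-list. A hedged sketch of that shape (the unit name old-flaky.service, the /restart/ path, and port 8080 are all made up, and it omits auth, which you would still want, e.g. a shared token header):

```python
# Sketch of option 4: a tiny HTTP listener on the target server that can
# only restart an allow-listed set of systemd units.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICES = {"old-flaky.service"}  # the one unit this listener may touch

def build_restart_command(service):
    """Map an allow-listed unit to its systemctl invocation, else None."""
    if service not in SERVICES:
        return None
    return ["systemctl", "restart", service]

class RestartHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # elastalert's HTTP POST action would hit e.g. /restart/old-flaky.service
        prefix = "/restart/"
        cmd = (build_restart_command(self.path[len(prefix):])
               if self.path.startswith(prefix) else None)
        if cmd is None:
            self.send_response(403); self.end_headers(); return
        ok = subprocess.run(cmd, capture_output=True).returncode == 0
        self.send_response(200 if ok else 500)
        self.end_headers()

# To run on the target box:
#   HTTPServer(("0.0.0.0", 8080), RestartHandler).serve_forever()
```

Because only allow-listed units map to a command, a compromised elastalert container can at worst restart that one service, which addresses the concern about options 1 and 2 needing broad root access.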

https://redd.it/zwevvp
@r_devops
React JS application in an S3 bucket

Is it possible to host a React JS application in an S3 bucket?

I want to deploy a React Js web application in an S3 bucket that will call an AWS Lambda function. Is it feasible?

My doubt is, since React JS is a dynamic scripting language, can this be hosted in an S3 bucket? Can React JS call a Lambda function endpoint?

https://redd.it/zwe0ad
@r_devops
Helm-Dashboard now enables cluster installation

A few months ago, we at Komodor released a new open-source project called Helm-Dashboard, which got a lot of positive feedback and attention from the community. I’m happy to share that now Helm-Dashboard can be installed both locally AND on a cluster.

It’s basically a GUI for Helm, designed to solve some of the more acute pain points of Helm users by visualizing changes in Helm charts. The goal is to help beginner Helm users to get started with Helm, and for more experienced users to speed up operations. The new cluster installation capability would enable users to collaborate better and share the same view of their charts.

Check it out on GitHub: https://github.com/komodorio/helm-dashboard

Feel free to join our Slack Kommunity: https://join.slack.com/t/komodorkommunity/shared_invite/zt-1dm3cnkue-ov1Yh~_95teA35QNx5yuMg

Give it a ⭐️ if you liked it :)

https://redd.it/zwg7wy
@r_devops
User lifecycle management and IaC

I wanted to know how people are managing user lifecycles in a way that is compatible with IaC. For example, we use Okta for provisioning and managing users but Terraform for basically everything else, and we have found that keeping our Terraform up to date with user churn is a challenge for tools like PagerDuty, where the list of users is important but constantly changing.

https://redd.it/zwg7lt
@r_devops
ARGOCD app not identifying resources

Hi,

I am trying to use the sample app from the documentation, and I cannot figure out why it's not identifying the underlying resources.

I tried "refresh" and "hard refresh" and checked the logs, but everything seems OK... I even reinstalled Argo.

Any pointers would be appreciated.

https://redd.it/zwdp9b
@r_devops
Does your team do sprints this week when half the team is out for the holidays?

I *jokingly* suggested we just have a few days of learning time this week instead of starting another sprint, but that was shot down..

Oh well.. march forward! AGILITY!

https://redd.it/zwi9iu
@r_devops
Enterprise Mobility and DevOps Combine to Increase Productivity and Agility

DevOps solutions can take enterprise mobility to the next level by increasing speed and customizability. Here are the top four ways **DevOps remains a game-changer for mobility solutions**. Let’s connect to discuss.

https://redd.it/zx1kdm
@r_devops
Conflicted on which position to pursue

I have been working with 2 teams in my org. the last few months.
One of the teams is mostly on-prem, comprised of Sys. Engineers who manage servers (databases, web servers, etc) with plans to transition to cloud and adopt more of a DevOpsy workflow (already use Ansible, will be building out more automation, adopt DevOps principles/perspectives). I like the vibe of this team, and they seem understanding of the fact that I am fairly new to DevOps/Ops.

The 2nd team is comprised of Devs who work entirely with AWS (lambdas, kinesis streams, DMS, some others). This team works with a few of our larger internal products/processes. I have been named the Terraform SME as that has comprised the majority of the work I have been doing with that team. I think the expectation if I join this team is that I will be entirely in charge of all of our AWS resource deployments via Terraform. Also have been building out some Azure DevOps pipelines to automate this.

I started working with the 1st team before the 2nd but have done ~4x as much work with the 2nd team, on top of the duties associated with my normal role. I think this is in part due to the large amount of scope creep I experience with the 2nd team, whereas the first team is understanding of the fact that I can only work with them on a part-time basis. I do enjoy working in AWS, but I worry that I may become overwhelmed being the "SME" with the 2nd team. I think the 1st team has a ton of long-term growth potential with a gradual ramp-up in what I will be doing, whereas the 2nd team is already in a spot where I can get my hands dirty, but the expectation is that I know what I am doing from day one.

If you have any insights or advice please share, and if you need more context let me know

https://redd.it/zwvvda
@r_devops
Idea for self-provisioning test servers - brilliant or bllsht?

I need to retrofit the provision of ephemeral servers into existing test pipelines. Creating and managing the servers isn't a problem - it's finding the best way to integrate this functionality into the existing "Test Rails" framework.

My first idea was to modify the tests so they can make a REST call to a resource manager as the first step. This is only practical if Test Rails is modular and we can easily add a step - we can't modify thousands of individual tests.

My second idea is a "door knock" approach. The tests would continue using the existing account details, but DNS would now point to a proxy that's listening on the appropriate ports. When it sees a connection on port 5432, it would launch a postgresql database (or pull one from an existing pool) and either act as a traditional proxy and/or play some games with the TCP/IP packets so the client and server can talk directly.

There's a significant downside - we would need to bump the connection timeouts from seconds to minutes. The "timeout" will usually reflect the time to a meaningful message, not the time to a successful TCP/IP handshake.

We could avoid this delay by keeping a pool of 'hot' servers but that defeats the goal of cost reduction by only running the servers while they're in use. But this could be negotiated, e.g., we have a 'hot' pool during "regular business hours" and shut it down on the weekend. (It's in quotes because the people most concerned about costs also tend to forget that we're an international team and there is no "overnight" and even the "weekend" is only about 36 hours from the last person working late on Friday to the first person starting work early on Monday.)

My question - is this a brain-dead idea? If not are there already solutions to the problem?

For what it's worth I'm a java dev pulled into devops since I'm the type to set up servers in a home lab for fun. I know ansible, am learning terraform, etc., but when I think of a proxy like this I still think in terms of a java application, spring REST, etc., even if I have an NGINX frontend to that app.
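The hot-pool trade-off described above can be prototyped independently of the proxy itself. A sketch of the pooling half only, with provision() as a stand-in for whatever actually creates a test server (all names here are hypothetical):

```python
# Sketch of the "door knock" idea's pooling half: a server is provisioned
# only when a connection actually arrives, and is returned to an idle pool
# afterwards so the next knock reuses it instead of paying the cold start.
from collections import deque

class LazyPool:
    def __init__(self, provision):
        self._provision = provision   # callback that creates a real server
        self._idle = deque()          # 'hot' servers ready for reuse
        self.provisioned = 0          # how many cold starts we paid for

    def acquire(self):
        """Reuse an idle server if one exists, otherwise provision one."""
        if self._idle:
            return self._idle.popleft()
        self.provisioned += 1
        return self._provision()

    def release(self, server):
        self._idle.append(server)

pool = LazyPool(provision=lambda: object())
s1 = pool.acquire()     # first knock: provisions a server (slow path)
pool.release(s1)
s2 = pool.acquire()     # second knock: reuses the idle server (fast path)
```

Draining `_idle` on a schedule (e.g. outside the negotiated "business hours") gives the cost-reduction behavior without changing the acquire/release interface the proxy would call.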

https://redd.it/zwm81e
@r_devops
Best way to run k8s apps locally

I have set up pipelines for deployment to k8s for different environments, and the developers are happy. But how do I enable them to easily run our applications for development locally? We have 10 ish apps running in k8s and they all depend on each other. To develop on one locally, you often need to have at least one or two of the others running at the same time, sometimes all. All apps are Scala-based and have a Dockerfile in their repo root.


Are there any best practices for this? I was thinking of maybe using docker-compose or a local k8s cluster (which seems overly complicated for every dev, though).
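For the docker-compose route, a sketch of what this can look like, with two hypothetical services standing in for the real apps (the names, ports, build paths, and the APP_B_URL variable are made up; each build path is assumed to be a repo checkout with its Dockerfile in the root, as described above):

```yaml
# Hypothetical sketch: run two of the apps locally from their existing
# Dockerfiles, wired together on compose's default network.
services:
  app-a:
    build: ../app-a            # repo checkout, Dockerfile in repo root
    ports:
      - "8080:8080"
    environment:
      APP_B_URL: http://app-b:9090   # compose DNS resolves service names
    depends_on:
      - app-b
  app-b:
    build: ../app-b
    ports:
      - "9090:9090"
```

Many teams start with compose for day-to-day development and keep a local cluster (kind, minikube) only for the cases that need real k8s behavior.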

https://redd.it/zx6g75
@r_devops
Need some help deploying a Docker stack to AWS

Hello!

I have a small app that I've written that I'm trying to split across multiple machines. I've been using Docker compose to simulate this locally and now need to figure out how to deploy it on AWS.

The app consists of:

* a 'main' node that sends commands to 'worker' machines.
* *n* workers can belong to any one 'main' node
* 'workers' are exclusive to a 'main' node and cannot be shared

Any idea on where I'd start with this? I was looking into using ECS but I'm a total AWS noob.

Thanks in advance!

https://redd.it/zwq6ik
@r_devops
Certificate Ripper v2 released - tool to extract server certificates

Hello everyone, today I have released version 2 of certificate ripper which includes the following new features:

* Support for proxies with authentication
* Exporting certificates as a binary file (DER) or base64 encoded (PEM)
* Exporting all certificates (i.e., the chain) of a single URL as a single file
* Specifying a custom file name for the exported files

It is an easy-to-use CLI tool to extract the full certificate chain of any server/website. The end user can easily inspect any sub-fields and details on the command line. The native executables are available in the releases section; see here: https://github.com/Hakky54/certificate-ripper/releases

Feel free to share your feedback or new ideas, I will appreciate it :)

See here for the github repo: GitHub - Certificate Ripper
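As an aside on the DER vs. PEM export mentioned above: PEM is just the DER bytes base64-encoded between BEGIN/END markers, which Python's stdlib can illustrate (this is an illustration of the format relationship only, not the tool's own code; der_bytes below is placeholder data, not a real certificate):

```python
# PEM is DER base64-encoded inside BEGIN/END CERTIFICATE markers; the
# stdlib ssl module converts in both directions without parsing the cert.
import ssl

der_bytes = b"\x30\x82\x01\x0a" + b"\x00" * 16  # placeholder, not a real cert

pem_text = ssl.DER_cert_to_PEM_cert(der_bytes)   # wrap in PEM framing
round_trip = ssl.PEM_cert_to_DER_cert(pem_text)  # strip framing, decode
```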

https://redd.it/zwvr1f
@r_devops
Establishing autonomy in your work

Hi guys,

And merry Christmas!

Another controversial subject that I would love to hear advice and tips'n'tricks about from like-minded people!
I know that as DevOps or even a sysadmin it is not easy to always have autonomy in how you work, since things like teamwork and syncing with other teams are often needed. But currently I am working on a nice Jenkins pipeline, and I enjoy how I can work autonomously and at my own pace, avoiding unnecessary morning meetings and the teeth of management lol.
So I'd love to hear from the more experienced folks how you avoid messed-up projects (a colleague got pulled into a very annoying, messy, immature product project; I still feel sorry about how that happened to him, and he swears he'll make sure it never happens to him again) and how you establish boundaries and work with autonomy...

https://redd.it/zwlq0u
@r_devops
Squid proxy service in docker with multiple IPs on the same interface

I am using squid in docker and have a problem connecting to other sites from a selected IP.
Connections always go out via the default host IP, not the additional failover IP.

My setup:

a) server
- dedicated server at ovh.org
- 1 dedicated IP from the server, and 6 additional ones via the OVH 'failover IP' service
- each failover IP is added to the main interface, so the main eno1 interface has 7 IPs
- I added all failover IPs following the guide on ovh.org

b) problem
- I added my failover IPs to squid.conf, but when I connect to such an IP remotely and use squid, outgoing traffic always uses the host IP, not the additional one. What is wrong?
- my gist with the docker-compose setup and squid.conf:
https://gist.github.com/mxcdh/22baa3d7fa2d9dcb2279520b81d71afa


P.S.
When I'm logged in to the host (not squid in docker) and run:
curl --interface ip-failover-1 icanhazip.com
ip-failover-1-results

it works, but through squid it does not.
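(For reference: squid selects its outgoing source address with the tcp_outgoing_address directive, matched per client via an acl. A hedged sketch; the acl name and 203.0.113.10 are made-up placeholders for one of the failover IPs. Note also that with docker's default bridge networking, outgoing traffic is NATed to the host's primary IP, so the directive only helps if the container can actually use the failover address, e.g. with network_mode: host.)

```
# Hypothetical squid.conf fragment: clients that connected to a given
# listening IP go out via the matching failover IP.
acl from_failover1 myip 203.0.113.10
tcp_outgoing_address 203.0.113.10 from_failover1
```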

https://redd.it/zwjnoy
@r_devops