Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Migrate from JFrog Artifactory to GitLab

So we are looking at shutting down our Artifactory server. I have been searching for instructions on how to import its contents into our GitLab server so we can host all of the historical data there. I would like to keep as much of the metadata as possible.

Has anyone done this, or can you suggest anything? Thanks

https://redd.it/108n8ts
@r_devops
How to scale Artifactory for 2,000-3,000 users? Possible?

Hey, folks!

Problem statements:

We have to design a public-facing Artifactory in such a way that 2,000-3,000 devs/DevOps engineers might run their operations on it at one time, and if demand increases further, it has to scale smoothly. I am not sure how to get started with this; the most I have seen is about 100 people accessing our JFrog servers.

Currently we are exploring JFrog only, but do we have more options? Can anyone share their experience with this?

Thanks!

https://redd.it/108yp2b
@r_devops
Convert MySQL query output to many formats, fast

I have created a tool website that can help you convert MySQL query output to a variety of commonly used formats, such as Markdown, Magic, LaTeX, SQL, HTML, CSV, Excel, JSON, JSONLines, ASCII, MediaWiki, AsciiDoc, Qlik, DAX, Firebase, YAML, XML, Jira, Textile, reStructuredText, TracWiki, BBCode, etc.

Using the tool is very simple; it only takes three steps:

1. Execute a query in MySQL, and copy the output to the clipboard.

2. Paste the output on our website and, if desired, adjust the data using the online editing function.

3. Select the format you want and convert it. The tool will automatically complete the conversion and generate a file.
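For what it's worth, the core of that kind of conversion (CLI table output into CSV) can be sketched in a few lines of Python. This is an illustrative sketch assuming the standard `mysql` CLI ASCII-table output, not the site's actual code:

```python
import csv
import io

def mysql_table_to_csv(mysql_output: str) -> str:
    """Convert the ASCII-table output of the `mysql` CLI to CSV.

    Expects the usual +----+----+ bordered format; border lines are
    skipped and each remaining row is split on the pipe separators.
    """
    rows = []
    for line in mysql_output.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("+"):
            continue  # skip the +----+ border lines
        # split on '|' and drop the empty leading/trailing fields
        cells = [c.strip() for c in line.split("|")[1:-1]]
        rows.append(cells)
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

sample = """
+----+-------+
| id | name  |
+----+-------+
|  1 | alice |
+----+-------+
"""
print(mysql_table_to_csv(sample))  # id,name / 1,alice
```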

The tool runs entirely in the browser, so there is no need for any API service; your data is completely safe because it only ever exists in your browser.

This tool is perfect for professionals such as data analysts, developers, and data scientists. It can help them process and analyze data faster, reduce manual work, and improve efficiency.

I want to promote my tool website here, and I hope more people get to know about it and use it in their own work. If you are interested, please visit https://tableconvert.com/mysql-to-csv for more information.

https://redd.it/108zmka
@r_devops
How To Stress Test a Distributed Application Setup?

So guys, we have a setup with two EC2 instances (one for the API, one for scraping), one ElastiCache Redis, and one RDS database.
When we run a search operation, the API server creates a request via the scraper, which does the scraping and loads the data into Redis and the DB; the API server then fetches it and displays the results on the webpage.

All of this works simultaneously and perfectly, and I think it can handle normal traffic because we use powerful instances for these services. Now I want to stress test the whole architecture: what is the best way to figure out which service will be the bottleneck as load increases, so we can work on that part and increase its capacity if needed? We will be getting loads of around 30K users, and I want to make sure the architecture handles it well; if needed, we will set up autoscaling for some services. Hope my query is clear. Please help me with how to stress test this architecture.
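One low-dependency way to start, before reaching for purpose-built tools like Locust or k6, is a concurrency sweep using only the Python standard library. This is a sketch; the URL is a placeholder, and ramping `concurrency` up step by step while watching CloudWatch metrics for each tier (API EC2, scraper EC2, Redis, RDS) shows which one saturates first:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_call(url: str) -> float:
    """Issue one request and return its latency in seconds."""
    start = time.perf_counter()
    urlopen(url, timeout=30).read()
    return time.perf_counter() - start

def load_test(url: str, concurrency: int, total_requests: int,
              call=timed_call) -> dict:
    """Fire total_requests requests with `concurrency` workers and
    report latency percentiles. Rerun with increasing concurrency;
    the tier whose latency explodes first is your bottleneck."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(call, [url] * total_requests))
    return {
        "p50": statistics.median(latencies),
        "p95": statistics.quantiles(latencies, n=20)[18],
        "max": max(latencies),
    }
```

For a real 30K-user target you would still want a distributed generator (Locust workers, k6 cloud, etc.), since a single client machine usually becomes the bottleneck itself before the backend does.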

Thanks!

https://redd.it/1092ii2
@r_devops
Amazon EKS with Terraform. Use EKS module or AWS provider to build a cluster?

Which is more common for building out an EKS cluster? The motivation for this question is I am trying to use Karpenter, and the examples show using the EKS module.

I prefer using a resource from a provider over a standalone module for Terraform, but Karpenter is not working with the provider.

https://redd.it/10961yd
@r_devops
Legitify supports scanning GitLab for security misconfigurations and best practices

https://github.com/Legit-Labs/legitify/releases/tag/v0.2.0
legitify v0.2.0 includes:

1. Scanning GitLab Cloud/Server and GitHub Enterprise for security misconfigurations and best practices
2. Custom GitHub Action to include scanning in CI/CD processes

I hope you find this useful and appreciate any feedback on the project!

https://redd.it/1093cz3
@r_devops
No-Code Status Pages

If your clients need a cost-effective, functional, no-code option for their status pages, you might suggest they consider Pulsetic status pages.

Released today; more details here: https://pulsetic.com/status-pages/

https://redd.it/1091xmc
@r_devops
Tagging releases in AWS Fargate

We have created a release-config.properties file in which we manually update the date so that we can tag our releases and keep track of them. This works for us, but I want to automate the process instead of manually updating the file in GitHub before merging the PR. How can I do this, or is there another method?
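One approach is a small script that the CI job runs before the merge (e.g. a GitHub Actions step on the PR that commits the change back). This is a sketch; the key name `release.date` is an assumption, so adjust it to whatever the file actually uses:

```python
import re
from datetime import date
from pathlib import Path

def stamp_release_date(path: str, key: str = "release.date") -> str:
    """Rewrite `key=<value>` in a .properties file with today's date.

    The key name `release.date` is a guess -- change it to match
    your release-config.properties.
    """
    today = date.today().isoformat()
    text = Path(path).read_text()
    new_text, replaced = re.subn(
        rf"^{re.escape(key)}=.*$", f"{key}={today}", text, flags=re.M)
    if replaced == 0:  # key missing: append it instead
        new_text = text.rstrip("\n") + f"\n{key}={today}\n"
    Path(path).write_text(new_text)
    return today
```

An alternative worth considering: skip the properties file entirely and tag the container image (and ECS/Fargate task definition) with the git tag or build timestamp at deploy time, so the release marker is generated rather than stored.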

https://redd.it/108ycm4
@r_devops
Artifactory Pypi repo uploads in offline environment

I have a janky environment: an Artifactory instance serving a PyPI repo on a network with no internet connection. I'm trying to upload some Python 3 packages and was able to get the wheel files using pip3 download on another machine. I can successfully install them locally with pip install --no-index /dir/<packagename>.whl (or .tar.gz), but I want to be able to install the packages from a requirements.txt with pip pointed at my corporate Artifactory PyPI repo.

I set up a .pypirc file and have verified that I can authenticate to my Artifactory PyPI repo. Where I'm stuck is understanding what I need to do to upload public packages to it. Do I have to create a setup.py file (per the JFrog docs) with all of the metadata for each package? There are dozens, but I'll do it if it's the fastest way. Appreciate any help!
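You generally do not need a setup.py for files that are already built: twine can upload existing wheels/sdists directly, and Artifactory exposes a PyPI deploy endpoint of the form https://<host>/artifactory/api/pypi/<repo-key>. A sketch that shells out to twine (the repo URL and directory are placeholders; credentials come from your .pypirc or the TWINE_USERNAME/TWINE_PASSWORD env vars):

```python
import subprocess
from pathlib import Path

def build_upload_cmd(package_dir: str, repo_url: str) -> list:
    """Collect the pre-downloaded wheels/sdists and build the twine
    command line -- no setup.py needed for already-built files."""
    files = sorted(str(p) for p in Path(package_dir).glob("*.whl"))
    files += sorted(str(p) for p in Path(package_dir).glob("*.tar.gz"))
    return ["twine", "upload", "--repository-url", repo_url, *files]

def upload_packages(package_dir: str, repo_url: str) -> None:
    """Run the upload; twine reads auth from .pypirc or TWINE_* vars."""
    subprocess.run(build_upload_cmd(package_dir, repo_url), check=True)
```

After the upload, `pip install -r requirements.txt --index-url https://<host>/artifactory/api/pypi/<repo-key>/simple` should resolve the packages from your repo.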

https://redd.it/10976nh
@r_devops
Monitoring infra cost: which tool do you use?

Hey everyone,

To monitor the costs of your infrastructure, what tools do you use? Those provided by cloud providers (e.g. aws cost explorer), third-party services or a solution created by yourself?

FYI, I ask because we are building an open-source solution for cloud cost monitoring and are trying to understand what people use today.

https://redd.it/10999w4
@r_devops
Is K8s worth it when running an application on a single-node cluster?

I have been tasked with rewriting an application developed in decade-plus-old technology. The application lags so severely on some days that it becomes unusable. I rewrote some modules (in React and Spring Boot) and demoed it running in minikube.

I was asked:

Q1. Why do I need k8s? Can't I make the application multi-threaded to utilize the full resources of the single node instead of parallelizing it through k8s? Won't k8s on a single node slow the application down instead of making it more responsive?

Q2. What benefits will k8s bring on a single node?

These are the benefits of k8s on a single node that I came up with:

Answer to Q1.

- An IBM research paper shows Docker performance is very close to native. Since k8s uses a container runtime such as Docker under the hood, there won't be any significant performance overhead.
- When the application runs as multiple containers on the single node, a crash of one container will not affect the others, which might not be the case with a single multi-threaded process running directly on bare metal.
- K8s automatically recovers/restarts crashed containers.
- Code in some languages is not multi-threaded out of the box; for example, in Java we have to explicitly implement multi-threading with the Thread class and the java.util.concurrent package. With multiple containers running the same application, we get full parallelism. (Famous frameworks like Spring Boot may do multi-threading out of the box, but I am talking about parallelizing all application code.)

Answer to Q2.

- Easy environment configuration:
- There will be fewer bugs caused by differences between a developer's local environment and the production environment if we use k8s.
- (Considering there is a plan to add more servers to the application in the future) Adding a new container that exactly matches the environment will be much easier than adding another machine that exactly matches the configuration.
- Also, if we have more than one node in the cluster, it will be easier to change containers by editing the corresponding k8s YAML configuration and redeploying than by making changes to servers manually (uninstalling/installing).

I have following doubts:

D1. Am I correct with the above answers?

D2. Is there anything I missed?

D3. I feel the answer to Q2 is sufficient. Also, does the convenience of environment configuration management mean we should always go for containerization?

https://redd.it/109etr8
@r_devops
Anyone take the DevOps online course from UChicago?

I'm trying to start my sibling down the DevOps career path.

Does anyone have experience with this 8-week online course offered by UChicago?
DevOps | UChicago


If not, what do you recommend/suggest?


thanks

https://redd.it/109fa13
@r_devops
Air travel across US thrown into chaos after computer outage

https://apnews.com/article/flight-delays-us-faa-updates-5805d15f520de8eadf52abb7b170487f

Anyone with knowledge of this NOTAM system care to share?

https://redd.it/109d3p8
@r_devops
What are your must-have scripts/playbooks for on-prem?

I’m currently working on a Terraform module to automate VMware Windows/Linux VM deployments, possibly also referencing an Ansible playbook to join our domain and handle other time-consuming tasks.

What do you guys use to improve your lives tremendously when not using cloud?

https://redd.it/109jnee
@r_devops
Chef Workstation on Ubuntu 22.10

Can you install Chef Workstation on Ubuntu 22.10?

I can't seem to find anything on the net about it. The only thing I can find is for Ubuntu 18.04 and possibly 20.04, but nothing for newer versions.

https://redd.it/109l34y
@r_devops
Propagating image changes to a k8s cluster

I have a CI loop in a repository that automatically builds and publishes a container on merge to the main branch. Usually, images are tagged with the git hash; when they are proven stable, they are additionally tagged with latest.

The deployment for Kubernetes is pointing to the latest tag. How would I automate updating the Kubernetes deployment when a new image is tagged with latest? Or am I simply going about this the wrong way?
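One common pattern is to have CI deploy by the immutable git-hash tag rather than pointing the Deployment at latest, issuing a `kubectl set image` when a build is promoted. A minimal sketch (the deployment/container/registry names are placeholders):

```python
import subprocess

def build_set_image_cmd(deployment: str, container: str, image: str,
                        tag: str, namespace: str = "default") -> list:
    """kubectl command that points the Deployment at an immutable tag.

    Deploying by git-hash tag (instead of `latest`) makes every
    rollout explicit and lets you roll back with
    `kubectl rollout undo`.
    """
    return ["kubectl", "-n", namespace, "set", "image",
            f"deployment/{deployment}", f"{container}={image}:{tag}"]

def deploy(deployment: str, container: str, image: str,
           git_sha: str) -> None:
    """Run the rollout as the promotion step of the CI pipeline."""
    subprocess.run(
        build_set_image_cmd(deployment, container, image, git_sha),
        check=True)
```

Declarative alternatives exist too: Flux and Argo CD image automation (or Keel) watch the registry and update the cluster for you. If you do keep the latest tag with imagePullPolicy: Always, a `kubectl rollout restart deployment/<name>` after tagging also works, but it leaves no record of which image is actually running.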

https://redd.it/109c7p2
@r_devops
Expensive Metrics: Why Your Monitoring Data and Bill Get Out Of Hand

Why do our metric data volume and our bill get out of control? How is it related to cardinality? And how can DevOps and SRE proactively manage it?
This blog lists some cost factors to consider:

https://horovits.medium.com/expensive-metrics-why-your-monitoring-data-and-bill-get-out-of-hand-e5724619e3f1

https://redd.it/109dhqb
@r_devops
Ever Reach the Point Where Despite Using Containers You Still Get “Works on my Machine”

I’m on hour 3 of debugging a CI pipeline: the Molecule test passes 100% of the time when I call it directly, but fails every time when I call it through pytest, which we use to parallelize those tests. I didn’t write the pipeline, so I’m mostly just reading code and untangling the spaghetti of how it’s all wired up.

Just thought I’d seek commiseration and funny stories of still hitting the “works on my machine” wall despite using containers.

https://redd.it/109swkt
@r_devops