Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
How do you debug an issue between varnish and Apache?

[centos@ip-172-35-25-65 ~]$ varnishlog
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635280998 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635281001 1.0
10 SessionOpen c 127.0.0.2 55870 127.0.0.2:80
10 ReqStart c 127.0.0.2 55870 894208400
10 RxRequest c GET
10 RxURL c /
10 RxProtocol c HTTP/1.0
10 RxHeader c X-Real-IP: 198.95.75.75
10 RxHeader c X-Forwarded-For: 198.95.75.75
10 RxHeader c X-Forwarded-Proto: https
10 RxHeader c X-Forwarded-Port: 80
10 RxHeader c Host: staging03.cherry.com
10 RxHeader c Connection: close
10 RxHeader c Cache-Control: max-age=0
10 RxHeader c Authorization: Basic aGc6am9objEyMw==
10 RxHeader c Upgrade-Insecure-Requests: 1
10 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36
10 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
10 RxHeader c Accept-Encoding: gzip, deflate
10 RxHeader c Accept-Language: en-US,en;q=0.9,fr;q=0.8
10 RxHeader c Cookie: ajs_anonymous_id=%22424f4cd9-cbbc-4ead-83b1-273cb21cf453%22; _fbp=fb.1.1630002144579.2012566540; __qca=P0-1416512434-1630002144589; _edwvts=708154457303700204; _gid=GA1.2.1572498662.1635275261; ajs_user_id=%224543534%40mimpi99.com%22; _gcl_au=1.1.
10 VCL_call c recv pass
10 VCL_call c hash
10 Hash c /
10 Hash c staging03.cherry.com
10 Hash c 80
10 Hash c ajs_anonymous_id=%22424f4cd9-cbbc-4ead-83b1-273cb21cf453%22; _fbp=fb.1.1630002144579.2012566540; __qca=P0-1416512434-1630002144589; _edwvts=708154457303700204; _gid=GA1.2.1572498662.1635275261; ajs_user_id=%224543534%40mimpi99.com%22; _gcl_au=1.1.1880042
10 VCL_return c hash
10 VCL_call c pass pass
10 FetchError c no backend connection
10 VCL_call c error deliver
10 VCL_call c deliver deliver
10 TxProtocol c HTTP/1.1
10 TxStatus c 503
10 TxResponse c Service Unavailable
10 TxHeader c Server: Varnish
10 TxHeader c Content-Type: text/html; charset=utf-8
10 TxHeader c Retry-After: 5
10 TxHeader c Content-Length: 392
10 TxHeader c Accept-Ranges: bytes
10 TxHeader c Date: Tue, 26 Oct 2021 20:43:23 GMT
10 TxHeader c X-Varnish: 894208400
10 TxHeader c Via: 1.1 varnish
10 TxHeader c Connection: close
10 TxHeader c X-Age: 0
10 TxHeader c X-Cache: MISS
10 Length c 392
10 ReqEnd c 894208400 1635281003.852778196 1635281003.852984428 0.000073195 0.000165701 0.000040531
10 SessionClose c error
10 StatSess c 127.0.0.2 55870 0 1 1 0 1 0 273 392
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635281004 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635281007 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635281010 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1635281013 1.0


I tried to capture what was happening; this is what I got on the client side:


Error 503 Service Unavailable
Service Unavailable

Guru Meditation:
XID: 894208400


Now, I thought it was because Apache wasn't running, since when I stop Varnish I get a 502 Bad Gateway from nginx. Anyway, I read the error logs:

[Tue Oct 26 14:53:47 2021] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Oct 26 14:53:47 2021] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Oct 26 14:53:47 2021] [notice] Digest: generating secret for digest authentication ...
[Tue Oct 26 14:53:47 2021] [notice] Digest: done
[Tue Oct 26 14:53:47 2021] [notice] FastCGI: process manager initialized (pid 23090)
[Tue Oct 26 14:53:47 2021] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fastcgi/2.4.6 configured -- resuming normal operations
[Tue Oct 26 14:53:52 2021] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /var/www/html/
[Tue Oct 26 14:53:52 2021] [error] [client 127.0.0.1] File does not exist: /var/www/html/favicon.ico, referer: https://staging03.hgreg.com/
[Tue Oct 26 15:01:21 2021] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /var/www/html/
[Tue Oct 26 15:01:42 2021] [notice] caught SIGTERM, shutting down
[Tue Oct 26 15:01:42 2021] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Oct 26 15:01:42 2021] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Oct 26 15:01:42 2021] [notice] Digest: generating secret for digest authentication ...
[Tue Oct 26 15:01:42 2021] [notice] Digest: done
[Tue Oct 26 15:01:42 2021] [notice] FastCGI: process manager initialized (pid 23299)
[Tue Oct 26 15:01:42 2021] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fastcgi/2.4.6 configured -- resuming normal operations
[Tue Oct 26 15:11:56 2021] [notice] caught SIGTERM, shutting down


I saw "caught SIGTERM, shutting down", so I thought maybe I should restart Apache. I did, but I get the same error, and no new entries in the error_log.

[centos@ip-172-35-25-65 ~]$ sudo service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
[centos@ip-172-35-25-65 ~]$ date
Tue Oct 26 17:12:32 EDT 2021
[centos@ip-172-35-25-65 ~]$


Now, I ran a Puppet run; it didn't complete, but I have the same files, so I am wondering what the issue might be. One of the Apache config files (they are all loaded, since every *.conf file is included) looks like this:
<VirtualHost *>
ServerName preprod.staging03.cherry.com



ServerAlias betacherry.staging03.cherry.com staging03.cherry.com



DocumentRoot /home/staging03/version/preprod.staging03.cherry.com
ServerAdmin [email protected]

SetEnv environment preprod
SetEnv project staging03

UseCanonicalName Off
#CustomLog /var/log/httpd/preprod.staging03.cherry.com_log combined
#CustomLog /var/log/httpd/preprod.staging03.cherry.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."

## User cherry # Needed for Cpanel::ApacheConf
UserDir disabled
UserDir enabled staging03

#<IfModule mod_suphp.c>
# suPHP_UserGroup staging03 staging03
#</IfModule>

SuexecUserGroup staging03 staging03

<Directory "/home/staging03/version">
AddHandler php5-fcgi .php
Action php5-fcgi /php5-fcgi-staging03
AllowOverride All

AuthType Basic
AuthName "staging03-preprod"
AuthUserFile "/etc/httpd/conf.d/htpasswd.staging03"
Require valid-user

Satisfy any
Deny from all

Order deny,allow
SetEnvIf X-Hg-Internal-IP 1 HgInternalIP=1
Allow from env=HgInternalIP

SetEnvIf User-Agent "Amazon CloudFront" AmazonCloudFront
Allow from env=AmazonCloudFront

SetEnvIf User-Agent "^(.*)Lighthouse(.*)$" Lighthouse=1
Allow from env=Lighthouse

</Directory>
<IfModule concurrent_php.c>
php5_admin_value open_basedir "/home/staging03:/usr/lib/php:/usr/local/lib/php:/tmp"
</IfModule>
<IfModule !concurrent_php.c>
<IfModule mod_php5.c>
php_admin_value open_basedir "/home/staging03:/usr/lib/php:/usr/local/lib/php:/tmp"
</IfModule>
<IfModule sapi_apache2.c>
php_admin_value open_basedir "/home/staging03:/usr/lib/php:/usr/php4/lib/php:/usr/local/lib/php:/usr/local/php4/lib/php:/tmp"
</IfModule>
</IfModule>
<IfModule !mod_disable_suexec.c>
<IfModule !mod_ruid2.c>
SuexecUserGroup staging03 staging03
</IfModule>
</IfModule>
<IfModule mod_ruid2.c>
RMode config
RUidGid staging03 staging03
</IfModule>
<IfModule itk.c>
# For more information on MPM ITK, please read:
# https://mpm-itk.sesse.net/
AssignUserID staging03 staging03
</IfModule>
</VirtualHost>


So what files should I look at, and how do I check that Apache isn't the problem? We have nginx routing to Varnish, which routes to Apache, so I'm thinking Apache is the problem, but I don't get any useful info from the logs: Apache runs without any issue, it's just not serving the page, and Varnish can't reach it for some reason.

I am running CentOS 6, and I have another server with the same configuration that's running well, but when I diff the /etc folders I don't really see any significant difference.

I am not sure what the problem might be here. I don't know if there's any other relevant log to look at, or what I can do to test what might be wrong with Apache or Varnish. I don't think Varnish is the problem, because I got 503 errors before when Apache wasn't running properly. However, I'm not sure how to find out exactly, since I don't see any errors in the logs.
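One way to narrow this down (my sketch, not from the post): decode the Basic credentials that Varnish logged, then replay the request straight at Apache, bypassing Varnish. The base64 string is the `RxHeader Authorization` value from the log above; the Apache address and the varnishadm port are assumptions based on a typical nginx → Varnish → Apache split on one box.

```shell
# Decode the Basic auth credentials Varnish logged
# (the string is the RxHeader Authorization value from the varnishlog above):
creds=$(echo 'aGc6am9objEyMw==' | base64 -d)
echo "$creds"
```

With the decoded user:password pair you can hit Apache directly, e.g. `curl -v -u "$creds" -H 'Host: staging03.cherry.com' http://127.0.0.1/`, and ask Varnish about its backend with `varnishadm -T 127.0.0.1:6082 debug.health` (the `debug.health` CLI command is from Varnish 2.x/3.x, which matches this log format; newer versions use `backend.list`). If curl straight to Apache works while Varnish keeps logging `FetchError no backend connection`, the problem is the backend definition in the VCL, not Apache.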

https://redd.it/qgn58m
@r_devops
Unpopular opinion: I was promised lightweight containers but I got yet another VM

So I started using Docker 2-3 years ago with the promise of replacing expensive VMs with 'lightweight' containers that would not hog my development machine, but I just can't help pointing out that every time I open Activity Monitor or Task Manager, Docker is consistently eating 4-6 GB of RAM and chewing through my CPU and battery. Now I'm considering dropping Docker completely and just running the project from the IDE or CLI.

What are your experiences?

https://redd.it/qgsg42
@r_devops
How far can you get with somebody else running sudo for you?

Hello,
we have this weird policy from global: if we use their VMs, they don't give us sudo permissions, but if we need to run something as sudo, they will run it for us.

Of course I think it's bullshit, but my plan for now is to comply with this and make sure I don't need sudo that much:

1. Can I run Jenkins, Docker, and Git under one service account, or is the only option adding them to the same user group?
2. I believe daemon scripts run through systemctl need to be under a system account, and there is no way around that?
3. Is there anything else, apart from yum installs, where I will heavily need sudo?
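On question 2: if the VMs run a systemd distro, daemons do not have to be system units. systemd supports per-user units managed without sudo via `systemctl --user`; an admin only needs to run `loginctl enable-linger <account>` once so they survive logout. A minimal sketch, with a hypothetical unit name, placed at `~/.config/systemd/user/myjob.service`:

```
[Unit]
Description=Example user-level daemon

[Service]
ExecStart=/usr/bin/sleep infinity
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user enable --now myjob` starts it with no root involved.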

I will try to see how it goes; maybe they can at least give me part of the sudo permissions, but overall I want to annoy the hell out of them with as many reasonable sudo requests as possible. If that doesn't work, I can still go to my manager and ask for local VMs, but if possible we want to use the global budget, not local.

And who knows, maybe it's possible to run in this restrained mode once the initial setup is done.

https://redd.it/qgqits
@r_devops
How do we switch from php-fpm to regular php in Apache?

How do we stop using php-fpm on Apache? I tried to get a server running; the problem is it runs CentOS 6, and our Puppet setup only works on CentOS 6. I was trying to figure out why the server wasn't working after I copied the settings over, and then I ran:


sudo netstat -plnt

which showed me I was running php-fpm, while on the working server I wasn't. I was perplexed, because the httpd configs were the same the last time I checked, so I am wondering how to switch from php-fpm back to plain PHP to see if I can get Apache running on the new server. I turned it off:


sudo service php-fpm stop

But I am still getting a 500 through Varnish. I'm not sure if I missed something in the httpd config, but the new and old servers had pretty much the same configs.


I have a php.conf:


#
# PHP is an HTML-embedded scripting language which attempts to make it
# easy for developers to write dynamically generated webpages.
#
<IfModule prefork.c>
LoadModule php7_module modules/libphp7.so
</IfModule>

<IfModule !prefork.c>
LoadModule php7_module modules/libphp7-zts.so
</IfModule>

#
# Cause the PHP interpreter to handle files with a .php extension.
#
AddHandler php7-script .php
AddType text/html .php

#
# Add index.php to the list of files that will be served as directory
# indexes.
#
DirectoryIndex index.php

#
# Uncomment the following line to allow PHP to pretty-print .phps
# files as PHP source code:
#
#AddType application/x-httpd-php-source .phps

#
# Apache specific PHP configuration options
# these can be overridden in each configured vhost
#
php_value session.save_handler "files"
php_value session.save_path "/var/lib/php/session"
php_value soap.wsdl_cache_dir "/var/lib/php/wsdlcache"

and a fastcgi conf:

# WARNING: this is a kludge:
## The User/Group for httpd need to be set before we can load mod_fastcgi,
## but /etc/httpd/conf.d/fastcgi.conf on RHEL gets loaded before
## /etc/httpd/conf/httpd.conf, so we need to set them here :(
## mod_fcgid does not have this bug,
## but it does not handle child PHP processes appropriately per
## https://serverfault.com/questions/303535/a-single-php-fastcgi-process-blocks-all-other-php-requests/305093#305093
User apache
Group apache

LoadModule fastcgi_module modules/mod_fastcgi.so

# dir for IPC socket files

FastCgiIpcDir /var/run/mod_fastcgi

# wrap all fastcgi script calls in suexec

FastCgiWrapper Off

# global FastCgiConfig can be overridden by FastCgiServer options in vhost config

FastCgiConfig -idle-timeout 120 -maxClassProcesses 1

# sample PHP config
# see /usr/share/doc/mod_fastcgi-2.4.6 for php-wrapper script
# don't forget to disable mod_php in /etc/httpd/conf.d/php.conf!
#
# to enable privilege separation, add a "SuexecUserGroup" directive
# and chown the php-wrapper script and parent directory accordingly
# see also https://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
#
#FastCgiServer /var/www/cgi-bin/php-wrapper
#AddHandler php-fastcgi .php
#Action php-fastcgi /cgi-bin/php-wrapper
#AddType application/x-httpd-php .php
#DirectoryIndex index.php
#
#<Location /cgi-bin/php-wrapper>
# Order Deny,Allow
# Deny from All
# Allow from env=REDIRECT_STATUS
# Options ExecCGI
# SetHandler fastcgi-script
#</Location>


&#x200B;

and a fcgi conf:

<IfModule mod_fastcgi.c>
Alias /php5-fcgi-staging03 /usr/lib/cgi-bin/php5-fcgi-staging03
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi-staging03 -socket /var/run/php-fpm/php5-fcgi-staging03.sock -pass-header Authorization -idle-timeout 300
</IfModule>


Commenting out php5-fcgi-staging03 gives me:


Not Found
The requested URL /php5-fcgi-staging03/index.php was not found on this server.
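For what it's worth, one concrete thing to check here (my suggestion, not from the post): `FastCgiExternalServer` points Apache at a unix socket, while the netstat output in the follow-up post shows php-fpm listening on TCP 127.0.0.1:9000. If those two don't agree, every PHP request will fail. A self-contained sketch of the comparison, with both directives inlined as sample data matching the snippets quoted in this thread (on the real box you'd grep the actual files under /etc/httpd/conf.d/ and /etc/php-fpm.d/):

```shell
# Sample data standing in for the two config files (contents taken from the
# snippets quoted in this thread):
apache_line='FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi-staging03 -socket /var/run/php-fpm/php5-fcgi-staging03.sock -pass-header Authorization -idle-timeout 300'
fpm_line='listen = 127.0.0.1:9000'

# What Apache will try to connect to (the argument after -socket):
apache_sock=$(printf '%s\n' "$apache_line" | awk '{for (i = 1; i <= NF; i++) if ($i == "-socket") print $(i + 1)}')

# What php-fpm actually binds (the value of its listen directive):
fpm_listen=$(printf '%s\n' "$fpm_line" | awk -F' *= *' '/^listen/ {print $2}')

echo "Apache expects: $apache_sock"
echo "php-fpm binds:  $fpm_listen"
```

A unix socket on one side and a TCP port on the other means Apache can never reach php-fpm, and PHP requests will 500 regardless of which handler you think is active.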

https://redd.it/qgz2l2
@r_devops
Can AWS or Cloudflare block traffic from URLs containing certain URL parameters?

We want to block a user if he's coming to our website from

example.com?referer=bar

We want to allow any other referrer to access our website; the only referer value that should be blocked is "bar".

The user shouldn't even be able to see our website.

If we block the user from the frontend, he might be able to manipulate the JavaScript. We can't block from the backend, since we have a static React JS application that would need to call a PHP API, so again a hacker could find that API call and manipulate it. We don't use server-side rendering.

Ideally, this should be done through a firewall, we use AWS and Cloudflare, do any of them have such capability?
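Both can, with caveats. AWS WAF has a query-string match statement, and Cloudflare firewall rules can match on the raw query string via the `http.request.uri.query` field with a Block action. A sketch of the Cloudflare expression (an assumption about your exact setup; note a plain substring match would also catch values like `referer=barbaz`, so tighten it if the value is exact):

```
http.request.uri.query contains "referer=bar"
```

Keep in mind this only blocks requests whose URL carries the parameter; once the user navigates to a clean URL, the firewall no longer sees it.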

https://redd.it/qgytlw
@r_devops
Which resume service do you use?

Experienced devops folks, which resume template do you use?

https://redd.it/qgwy7n
@r_devops
Release management strategy - nothing working…

We are working with a low code tool where each standalone web page is deployed independently of others. It is pretty advanced despite being low-code; we have several software engineers doing this stuff.

Then we have our user stories. Within the user story card, we have the affected page list. When a story is marked ready for production, I as the release manager make sure these pages are deployed to production. (Working on automation but we are just trying to tread water at this point - no one wants to slow development to actually establish a good process.)

The problem sometimes is that two stories may have the same page(s) listed in the page list. If just one of the stories is “ready for production”, I move the page(s) accordingly.

Then I get analysts complaining to me the next day that the page shouldn’t have moved, even though another analyst validated a different story that requires the page and marked it ready for production.

Basically, the core issue is the many-to-many relationship between stories and coded pages.

Any ideas?

https://redd.it/qgzmr0
@r_devops
New Podcast "DevOps Domination" - All things DevOps, Software, and Infrastructure!

Hey everyone! 👋

I recently launched a podcast about DevOps, and wanted to get some feedback. I only have one episode up but give it a listen and let me know what you think! Description follows:

>All the juicy details of being a DevOps/Site Reliability Engineer for large enterprises at your eardrums!

This is my first podcast and I don't have a proper microphone, and perhaps I say "um" too much, but if you can forgive all that, you might learn something interesting!

Link to the first episode on Spotify: https://open.spotify.com/episode/2G7xysNexS5KskJ11FEkMo?si=CpTLcb1rR_C-0YY-vx2YFA

If other people use different podcast hosts, let me know and I'll try to publish it there as well.

Looking forward to your feedback and suggestions for future episodes!

https://redd.it/qh5kg6
@r_devops
Curling apache gives 401 and varnish gets 500 from Apache

[centos@staging03 ~]$ sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 3600/httpd
tcp 0 0 127.0.0.2:80 0.0.0.0:* LISTEN 1574/varnishd
tcp 0 0 172.31.22.60:80 0.0.0.0:* LISTEN 1539/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1251/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1501/master
tcp 0 0 127.0.0.1:443 0.0.0.0:* LISTEN 3600/httpd
tcp 0 0 127.0.0.1:6082 0.0.0.0:* LISTEN 1573/varnishd
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 3468/php-fpm
tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 1229/memcached
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1061/redis-server 1
tcp 0 0 :::22 :::* LISTEN 1251/sshd
tcp 0 0 :::3306 :::* LISTEN 1383/mysqld

I investigated what the issue with my server was, and when I ran:

curl 127.0.0.1:80

I got:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Authorization Required</title>
</head><body>
<h1>Authorization Required</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
<hr>
<address>Apache/2.2.15 (CentOS) Server at 127.0.0.1 Port 80</address>
</body></html>

On a different server where everything is working, I get a blank response instead, so I'm thinking this is why I am getting a 500 error from Apache through Varnish.

In the Apache log, I didn't really get anything when I curled, but before that I got:

[Wed Oct 27 17:02:25 2021] [notice] caught SIGTERM, shutting down
[Wed Oct 27 17:02:25 2021] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Oct 27 17:02:25 2021] [notice] Digest: generating secret for digest authentication ...
[Wed Oct 27 17:02:25 2021] [notice] Digest: done
[Wed Oct 27 17:02:25 2021] [notice] FastCGI: process manager initialized (pid 3602)
[Wed Oct 27 17:02:25 2021] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fastcgi/2.4.6 configured -- resuming normal operations

So it seems FastCGI is properly configured, and the issue I'm getting from Apache is, strangely enough, an authentication issue. Is there anything else I can do to pinpoint the problem?


Varnish gives the following:


12 TxHeader b X-Varnish: 1537309960
12 RxProtocol b HTTP/1.1
12 RxStatus b 500
12 RxResponse b Internal Server Error
12 RxHeader b Date: Wed, 27 Oct 2021 21:14:18 GMT
12 RxHeader b Server: Apache/2.2.15 (CentOS)
12 RxHeader b Expires: Wed, 11 Jan 1984 05:00:00 GMT
12 RxHeader b Cache-Control: no-cache, must-revalidate, max-age=0

However, I have no way of checking what the 500 Internal Server Error actually is, because the PHP error logs seem to be empty. One thing I noticed is that when I reboot and don't start Apache I get a
Storing secrets

Currently, all our passwords, API keys, etc are embedded in our code.

I started looking online but it just got me into a huge rabbit hole as there are just so many suggestions and considerations to make.

One of the things I saw more often is the use of vaults like Hashicorp/Ansible vault. I know a bit about Ansible Vault and I recall it requiring inputting the vault password to retrieve the encrypted secrets.

This can be a problem since the code has to use the secrets as well.

Even if the code could supply the vault password, isn't that a problem in itself, since the vault password would then be visible in the code? What approach should I consider? How should the components interact with each other? Would it have operational considerations like monitoring, backups, deployments, etc.?

Thanks ahead!
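One common pattern worth knowing while you research (a sketch, not a full answer to the bootstrap problem): the application never stores secrets in code; it reads them from its environment at start-up, and only the deploy tooling (CI, a vault agent, cloud instance metadata) authenticates to the vault and injects them. `DB_PASSWORD` below is a hypothetical variable name:

```shell
# Deploy-tooling side (simulated here): inject the secret into the process
# environment; in real life this line is replaced by e.g. a vault lookup.
export DB_PASSWORD='s3cret'

# Application side: fail fast if the secret is missing, never hard-code it.
: "${DB_PASSWORD:?DB_PASSWORD is not set, refusing to start}"
echo "secret loaded (${#DB_PASSWORD} characters)"
```

The remaining question, how the deploy tooling itself authenticates, is exactly the bootstrap problem you describe; vault products address it with machine identities (cloud IAM roles, HashiCorp Vault's AppRole, etc.) rather than a password checked into code.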

https://redd.it/qh5be6
@r_devops
I am currently a Cyber Security Analyst who is being offered a free 14-week DevOps course. Should I take it?

I value my time. I don't want to take it just because it's free. I don't know a whole lot about DevOps or DevSecOps. Could you please tell me how taking the course would be useful in terms of my career?

Edit:
It's supposed to prepare you for in-demand DevOps engineer roles, with hands-on experience maintaining application infrastructure through deployment, provisioning, configuration management, and monitoring. You'll also learn about the cloud technologies available for DevOps operations.

https://redd.it/qhc0q0
@r_devops
Free parallel jobs, stages, pipelines?

I've been working with Azure DevOps Pipelines and was looking at implementing parallel pipelines. Pipelines that would run concurrently for my microservices:

* /
* /admin
* /api

Apparently, this is not possible with Microsoft-hosted agent pools, so I'm looking into self-hosted ones. Azure DevOps Pipelines supports parallel Jobs at $40/mo per parallel job; under a Stage, I would need a Job for each of the microservices.

Before paying for that, I was curious whether any other pipeline (GitHub Actions, GitLab CI/CD, etc.) offers free parallel pipelines, stages, or jobs, and whether this is a pretty common practice.
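For comparison (my sketch, not from the post): GitHub Actions runs the jobs in a workflow in parallel by default, constrained by per-plan concurrency limits rather than a per-parallel-job fee, and a matrix keeps the definition to one job. `build.sh` and the service names are placeholders:

```
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [root, admin, api]
    steps:
      - uses: actions/checkout@v2
      - run: ./build.sh ${{ matrix.service }}
```

GitLab CI behaves similarly: jobs within a stage run in parallel on shared runners, subject to the plan's CI-minute quota.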

https://redd.it/qh6aq5
@r_devops
How to iterate and dynamically add the nic to loadbalancer in azure using terraform

Hi all,

I am trying to read the network interface IDs dynamically from a data source, iterate through the list, and have each NIC add itself to the backend pool association. I came up with the logic below, but I'm hitting an error like "count can only be used when the count argument is set". Please help; the code is below:

```

data "azurerm_network_interface" "example" {
  count               = length(var.nic_name)
  name                = var.nic_name[count.index]
  resource_group_name = "networking"
}

resource "azurerm_network_interface_backend_address_pool_association" "example" {
  network_interface_id    = data.azurerm_network_interface.example[count.index].id
  ip_configuration_name   = data.azurerm_network_interface.example[count.index].ip_configuration[0].name
  backend_address_pool_id = azurerm_lb_backend_address_pool.example.id
}

variable "nic_name" {
  type    = list
  default = ["somenic1", "somenic2"]
}

```
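For reference, that error usually means `count.index` is being used in a block that doesn't declare `count` itself: here the data source has `count`, but the `azurerm_network_interface_backend_address_pool_association` resource doesn't. A sketch of the association with its own `count` (one association per NIC, reusing the names from the snippet above):

```
resource "azurerm_network_interface_backend_address_pool_association" "example" {
  count                   = length(var.nic_name)

  network_interface_id    = data.azurerm_network_interface.example[count.index].id
  ip_configuration_name   = data.azurerm_network_interface.example[count.index].ip_configuration[0].name
  backend_address_pool_id = azurerm_lb_backend_address_pool.example.id
}
```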

https://redd.it/qh5a81
@r_devops
Being afraid of asking a technical question to my coworkers

I have been hired (mid DevOps) along with 2 juniors at the same time. Currently, I am assigned to one project with one of them and one senior dev. We have some K8s to deploy in AWS.

The second junior was fired after failing one project where his senior was angry at him all the time, just because he was asking a lot of technical questions.

Now I am afraid that I will be fired too if I run into some trouble along the way.

Is this a normal state for a company? Previously I was at a company where everyone was learning from everyone and asking a question was a common thing.

https://redd.it/qhiaue
@r_devops
Future proofing... Realistic things I can do, without having programming experience?

So, Systems Engineer here focusing on networks and voice.

The writing is on the wall for my role as it is now (not straight away, probably in 5 years or so): it will be absorbed into a virtualised and automated platform...

I have a looonnggg time left in my career, so what can I do to future-proof myself? Bear in mind these skills will only be built in a "non-work" environment, as I currently don't deal with virtualisation or automation. I just don't want to be in a position where I've been de-skilled by technology...

I've done a year each in college of C, C++ and Java, but that was over 10 years ago and never professionally.

I can drive Linux for what I need it to do at the moment, can do basic scripting, have a basic understanding of Python.

I am almost a CCNP also.

What can I be doing to cultivate DevOps skills that I can present in an interview and that isn't seen as personal-time "fluff"? I'm almost certain I won't get DevOps experience for at least a year or two in my current role.

The Cisco DevNet exams seem like a logical choice, but they still aren't real-life examples.

https://redd.it/qhj5c9
@r_devops
Not allowed to have a cross-functional DevOps team

My team are now DevOps and have been for around 2 years. It has been a transition from the "old" model where Software Engineers developed the software and another team handled infrastructure and operations. We've now moved our systems from on-premise to the cloud and the team are supposed to be DevOps Software Engineers doing both development and operations, including creating, deploying, operating the virtualised infrastructure.

We are a team full of Software Engineers with no prior "Operations" experience and we're getting tired of just doing Operations and not enough Development. We had the idea to ask our line manager to put someone in our team with an Operations focus to join the team to make it cross-functional so the "Software Engineers" could get a satisfying diet of development and the Operations expert could get a satisfying diet of Operations.

My line manager has declined the request saying that our company has a policy of not having cross-functional teams. We are not allowed to have cross-functional DevOps teams in our company supposedly. Instead, every engineer must do both Dev & Ops.

What are your thoughts on this situation and what are your thoughts on advantages/disadvantages of having a cross-functional team so that developers can get their fair share of satisfying "coding"? :-)

https://redd.it/qhk2d7
@r_devops