Vulnerability Management and more
Vulnerability assessment, IT compliance management, security automation.
Russian channel: @avleonovrus
Russian live news channel: @avleonovlive
PM @leonov_av
To tell the truth, all attempts to optimize the installation of security updates ("patch this, it's critical; don't patch this, it's not"), including the latest Predictive Prioritization concept, seem quite weak and awkward if you keep in mind how little we know about all existing vulnerabilities. There are so many uncertainties in this area, and so much information is hidden. In fact, it's a miracle that information about a vulnerability sometimes reaches the software vendor, which confirms it and releases a patch with a very short description. Most likely, that vulnerability had already been exploited in the wild for a long time before this lucky moment, but still.
If the vulnerability is high/critical, the software vendor usually recommends installing the updates immediately. Regulators, such as the PCI Council, recommend the same. What is most strange and unhealthy is that nowadays big enterprises (basically their IT and business departments) are en masse not ready to implement these recommendations as is: to track security updates from the vendor constantly and install them all on each and every host. Simply because it takes too much time and effort. What they really want from Vulnerability Management is an excuse, an indulgence for disobedience. They want somebody to show them why each vulnerability is not critical in "reality", even if it is marked as critical by the software vendor. And they want it done based on incomplete and contradictory data.
Don't get me wrong, I stand for more practical Vulnerability Management processes that propose real attack scenarios. But when we talk about vulnerability remediation, the best strategy, IMHO, is to install all the patches and harden the configuration, just as the vendors recommend. And to do it constantly and proactively. I don't have to scan a server that wasn't patched for two months to say that it has high and critical vulnerabilities. It's just a natural thing. And, after all, it's IT and business who choose these IT vendors and solutions, not the Security guys, right?

The Vulnerability Management process should demonstrate that everything is fine in the company and raise an alarm if something goes wrong in some IT process. NOT push the patching process all the time! (Well, ideally.)
The Russians Did... Oh, not this time. 🙂 "A spokesperson for Turning Point USA, meanwhile, told the paper he was stumped as to the origins of the image, characterizing it as “a last-minute A/V mistake”... a staffer, who has been fired, stumbled on the image in an online search and used it in error. “I don’t think it was malicious intent”..." #friday #fun
Well, after #Immunity Inc. recently released a video of #BLUEKEEP RDP RCE exploitation in #CANVAS 7.23 with getting a shell, I no longer have any doubt that this vulnerability will be used in real attacks. 🙂 However, the exploitation process seems complicated and not very reliable, with high chances of crashing the target system. Currently, CANVAS supports only Windows 7 SP1 as a target, but they promise to add other vulnerable versions in further updates. #cve20190708 #exploit
When the ideal scenario for Vulnerability Management is not possible, there are other options. Business sets the priority; every whim for your money. 🙂
Let's say we can't update every host in the organization regularly and fully. Fair point, it requires a lot of resources.

1. We can play with the scope and regularly update at least the most critical hosts (this means we need excellent Asset Management and clear criteria for critical hosts, data and processes) and isolate them from the rest of the organization as much as possible.

We don't have the resources to install all the updates even on these most critical hosts?

2. Ok, let's go further and play with the criteria for the most critical vulnerabilities and fix those vulnerabilities first (see the scoring sketch at the end of this post). This is a much more complicated task, because information about vulnerabilities is partial, subjective and confusing. Can you say that a vulnerability is critical and will be exploited soon just by reading a short description? Especially if you are not very familiar with the particular software and how it is used in the organization. I can't. And there is no magical neural network that could do this. This is a form of fortune telling.

Still no way to fix even these most critical vulnerabilities on a limited scope of hosts?

3. Ok, I get it. Let's think about workarounds, compensating measures, advanced monitoring and isolation. At least that will make exploitation harder.

Vulnerability Management can be very flexible!
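
To illustrate option 2, here is a minimal sketch of how such criteria might be combined into a single score. It's Python, the weights and fields are completely made up, and the inputs (CVSS, public exploit availability, asset criticality) are just the usual suspects; the formula is an assumption, not a recommendation:

# prioritize.py - a toy vulnerability prioritization sketch (all weights are arbitrary)

def priority_score(cvss, exploit_public, asset_critical):
    """Combine a few common criteria into one number; higher = fix sooner."""
    score = cvss  # start from the CVSS base score (0..10)
    if exploit_public:
        score += 3  # a public exploit usually matters more than the base score itself
    if asset_critical:
        score += 2  # exposure on a critical host raises the stakes
    return score

vulns = [
    {"id": "VULN-1", "cvss": 9.8, "exploit_public": False, "asset_critical": False},
    {"id": "VULN-2", "cvss": 7.5, "exploit_public": True, "asset_critical": True},
]

# Sort so that the "fix first" candidates come out on top
for v in sorted(vulns, key=lambda v: priority_score(v["cvss"], v["exploit_public"], v["asset_critical"]), reverse=True):
    print(v["id"], priority_score(v["cvss"], v["exploit_public"], v["asset_critical"]))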
This is most likely #slowpoke news, but I just found out that Tenable .audit files with formalized Compliance Management checks are publicly available and can be downloaded without any registration. 😳🤩 However, you must accept the looooong license agreement.

So, I have two (completely theoretical!) questions 🤔:

1) What if someone supports the .audit format in some compliance management tool and gives the end user the ability to use this #Tenable content to assess their systems? Would that be fair and legal?

2) What if someone uses this content as a source of inspiration for their own content, for example, in the form of #OVAL / #SCAP or some scripts? Would that be fair and legal?
Well, continuing the last topic. Each Tenable .audit script contains the header "script is released under the Tenable Subscription License" with a reference to the NESSUS® SOFTWARE LICENSE AND SUBSCRIPTION AGREEMENT. This document was last updated on 12.08.17. For example, it still mentions Nessus Home rather than Nessus Essentials.

The document does not mention .audit scripts directly. But it mentions "Plugins". Maybe by plugins they mean only NASL plugins, maybe .audit files as well. In any case, .audit files should be considered part of the "Licensed Materials".

Honestly, I did not find a single clause prohibiting the use of these files (as is) as an input for tools that were not made by Tenable. Maybe only the general limitation in "5. Intellectual Property.": "Your rights with respect to the Licensed Materials are limited to the right to use the Licensed Materials pursuant to the terms and conditions in this Agreement. Any rights in or to the Licensed Materials (including rights of use) not expressly granted in this Agreement are reserved by Tenable". So, to me it seems like a gray zone.

Speaking about the use of Tenable .audit files to make other forms of security content, I found the most interesting limitations in "6. No Reverse Engineering, Other Restrictions": "You may not directly or indirectly: [...] translate or create derivative works of all or any part of the Licensed Materials". When you convert .audit files to some other form, it will probably create a derivative work. However, it's unclear how this squares with the fact that .audit files are often based on publicly available documents, or on documents that are the intellectual property of third parties, such as the Center for Internet Security.

In any case, it seems that extracting the checks from Tenable .audit files can cause problems, and it's better to avoid this. Especially if you work for a security vendor or service provider, because "You may not use the Licensed Materials if You are, or You work for, a competitor of Tenable's in the network security software industry. For the avoidance of doubt, You may not include or redistribute the Licensed Materials on physical or virtual appliances to perform on-site scans."

There is also a great section, "3(c). Custom Nessus Plugin Development and Distribution": "Tenable allows users to write and develop new Nessus plugins; however, You must have an active Nessus subscription in order to add plugins to Your Nessus scanner". It's obviously about NASL scripts, and there are restrictions on the public distribution of custom plugins that use certain APIs and ".inc" libraries. But if .audit scripts are legally "plugins", you can create your own custom content in this form and use such files in any tools, if that makes sense.
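
For context, a custom .audit check is just a block of key-value pairs in a text file. A simplified, hypothetical example of what your own custom content in this form could look like (the field names follow the publicly documented custom_item structure; the concrete check itself is made up for illustration):

<custom_item>
  type        : REGISTRY_SETTING
  description : "Example: Remote Desktop requires Network Level Authentication"
  value_type  : POLICY_DWORD
  value_data  : 1
  reg_key     : "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"
  reg_item    : "UserAuthentication"
</custom_item>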

Upd. Saved this to my blog.
I recently figured out how to work with Microsoft Active Directory using Python 3. I wanted to get a hierarchy of Organizational Units (OUs) and all the network hosts associated with these OUs to search for possible anomalies.

Some code examples are in my blog: https://avleonov.com/2019/08/12/how-to-get-the-organization-units-ou-and-hosts-from-microsoft-active-directory-using-python-ldap3/
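
The gist of it, as a minimal sketch (the server address, credentials and base DN below are placeholders; it uses the ldap3 module mentioned in the post):

from ldap3 import Server, Connection, ALL, SUBTREE

# Placeholders - replace with your domain controller and service account
server = Server("dc01.corp.example.com", get_info=ALL)
conn = Connection(server, user="CORP\\svc_audit", password="***", auto_bind=True)

base_dn = "DC=corp,DC=example,DC=com"

# 1. All Organizational Units (each entry's DN encodes the OU hierarchy)
conn.search(base_dn, "(objectClass=organizationalUnit)", SUBTREE, attributes=["ou"])
ous = [entry.entry_dn for entry in conn.entries]

# 2. All computer objects; each one's DN shows which OU it sits in
conn.search(base_dn, "(objectClass=computer)", SUBTREE, attributes=["dNSHostName"])
for entry in conn.entries:
    print(entry.entry_dn, entry.dNSHostName)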

#API #AssetManagement #ActiveDirectory #AD #BeyondTrust #LDAP #ldap3 #Microsoft #MicrosoftADExplorer #OU #PowerShell #python #python3
This time Patch Tuesday is quite interesting. Two RCEs in Remote Desktop Services (RDS) - Microsoft's implementation of thin client architecture, where Windows software, and the entire desktop of the computer running RDS, are made accessible to any remote client machine that supports Remote Desktop Protocol (RDP). ^wiki

All current Windows versions are affected:

"The affected versions of Windows are Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2012, Windows 8.1, Windows Server 2012 R2, and all supported versions of Windows 10, including server versions.

Windows XP, Windows Server 2003, and Windows Server 2008 are not affected, nor is the Remote Desktop Protocol (RDP) itself affected."

"There is partial mitigation on affected systems that have Network Level Authentication (NLA) enabled."

No information about the exploits yet.

Upd. #DejaBlue is an awesome name 😅
Continuing the Vulnerability Management topic. The first part was about how the VM (and Patch Management) process should ideally work, the second was about possible compromises. This one is about the right mindset and staying focused.

IMHO, all the flexibility of the VM process makes sense ONLY if there are no better options. It's critically important to articulate that the situation in which it's necessary to ignore the requirements of the software vendors and regulators is NOT normal. If an organization can only function in this way, it's someone's fault. And this is certainly not the fault of the IT security guy who has to audit all this mess.

Let's say there is a monstrous business application that only works with some specific outdated version of Java, and it is impossible to rewrite this application to use the new version, or even to test how the app would work with the updated version. Well, doesn't that simply mean the initial decision to use Java was bad?
Did the person who made this decision understand all the implications? In fact, we can ask similar questions about all vulnerable systems that cannot be easily updated. There will always be someone's poor decisions.

It's important to constantly clarify that the Vulnerability Management guys (and the entire IT Security team) are doing their best in the given situation, caused by the bad decisions of other people from IT and Business, and to keep in mind how things should be done right. Otherwise, there is a huge risk of getting stuck in "Stockholm syndrome", and the Vulnerability Management process in the organization will become a complete sham.
#Zbrunk project (github) began almost like a joke. And in a way it is. 😜 In short, my friends and I decided to make an open-source (MIT license) tool, which will be a kind of alternative to #Splunk for some specific tasks. So, it will be possible to:

* Put structured JSON events into Zbrunk using the HTTP collector API
* Get the events from Zbrunk using the HTTP search API
* Make information panels based on these search requests and place them on dashboards

Why is it necessary? Well, I've worked a lot with Splunk in recent years. I like the main concepts, and I think working with events is a very effective and natural way of processing and presenting data. But for my tasks (Asset Management, Compliance Management, Vulnerability Management), with several hundred megabytes of raw data per day to process and dashboards that need to be updated once or several times a day, Splunk felt like overkill. You really don't need such performance for these tasks.
And, considering the price, it only makes sense if your organization already uses Splunk for other tasks. After Splunk's decision to leave the Russian market, this became even more obvious, so many people began to look for alternatives for a possible and, as far as possible, painless migration.

We are realistic: the performance and search capabilities of Zbrunk will be MUCH worse. It's impossible to make a solution as universal and effective as Splunk as a pet project without any resources. So don't expect something that will process terabytes of logs in near real time; the goal is completely different. But if you want a basic tool to make dashboards, it's worth a try. 🙂

Now, after the first weekend of coding and planning, it's possible to send events to Zbrunk just like you do with the Splunk HTTP Event Collector, and they appear in MongoDB:

$ echo -e '{"time":"1471613579", "host":"test_host", "event":{"test_key":"test_line1"}}\n{"time":"1471613580", "host":"test_host", "event":{"test_key":"test_line2"}}' > temp_data
$ curl -k https://127.0.0.1:8088/services/collector -H 'Authorization: Zbrunk 8DEE8A67-7700-4BA7-8CBF-4B917CE2352B' -d @temp_data
{"text": "Success", "code": 0}

In Mongo:

> db.events.find()
{ "_id" : ObjectId("5d62d7061600085d80bb1ea8"), "time" : "1471613579", "host" : "test_host", "event" : { "test_key" : "test_line1" }, "event_type" : "test_event" }
{ "_id" : ObjectId("5d62d7061600085d80bb1ea9"), "time" : "1471613580", "host" : "test_host", "event" : { "test_key" : "test_line2" }, "event_type" : "test_event" }

Thus, it will be very easy to reuse your existing custom connectors if you already have some. The next step is to make the basic HTTP search API, prepare dashboard data using these search requests, and somehow show these dashboards, for example, in Grafana. Stay tuned and welcome to participate. 😉
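
For example, a custom connector in Python would be as trivial as the curl call above (a sketch assuming the same endpoint, token and newline-delimited JSON payload as in that example; requests is the only dependency):

import json
import requests

# Same endpoint and token as in the curl example above (placeholders)
URL = "https://127.0.0.1:8088/services/collector"
TOKEN = "8DEE8A67-7700-4BA7-8CBF-4B917CE2352B"

events = [
    {"time": "1471613579", "host": "test_host", "event": {"test_key": "test_line1"}},
    {"time": "1471613580", "host": "test_host", "event": {"test_key": "test_line2"}},
]

# One JSON object per line, like the Splunk HEC format; verify=False mirrors curl -k
payload = "\n".join(json.dumps(e) for e in events)
resp = requests.post(URL, data=payload, headers={"Authorization": "Zbrunk " + TOKEN}, verify=False)
print(resp.json())  # expecting {"text": "Success", "code": 0}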
(Attached image: zbrunk_madskillz.jpg)
If you can implement a Vulnerability Management process in your organization and, at the same time, build and maintain good relationships with the IT team, it's really awesome. This is the only way to make this process work effectively.

But do you really think you will be able to keep such a good relationship in the case of a major incident caused by some vulnerability on some host? We can recall the Equifax case again, but it could be anything.

Hah. The chances are great that in this case there will be a mass fight, and everyone will try to save their job and face, no matter what. It will be very convenient and natural to make the VM specialist the main, or even the only, person responsible for all this trouble.