Vulnerability Management and more
Vulnerability assessment, IT compliance management, security automation.
Russian channel: @avleonovrus
Russian live news channel: @avleonovlive
PM @leonov_av
This is most likely #slowpoke news, but I just found out that Tenable .audit files with formalized Compliance Management checks are publicly available and can be downloaded without any registration. 😳🤩 However, you must accept the looooong license agreement.

So, I have two (completely theoretical!) questions 🤔:

1) What if someone supports the .audit format in some compliance management tool and gives end users the ability to use this #Tenable content to assess their systems? Would that be fair and legal?

2) What if someone uses this content as a source of inspiration for their own content, for example, in the form of #OVAL / #SCAP or some scripts? Would that be fair and legal?
Well, continuing the last topic. Each Tenable .audit script contains the header "script is released under the Tenable Subscription License" with a reference to the NESSUS® SOFTWARE LICENSE AND SUBSCRIPTION AGREEMENT. This document was last updated on 12.08.17. For example, it still mentions Nessus Home rather than Nessus Essentials.

The document does not mention .audit scripts directly. But it mentions "Plugins". Maybe by plugins they mean only NASL plugins, maybe .audit files as well. In any case, .audit files should be considered part of the "Licensed Materials".

Honestly, I did not find a single clause prohibiting the use of these files (as is) as input for tools that were not made by Tenable. Maybe only the general limitation in "5. Intellectual Property.": "Your rights with respect to the Licensed Materials are limited to the right to use the Licensed Materials pursuant to the terms and conditions in this Agreement. Any rights in or to the Licensed Materials (including rights of use) not expressly granted in this Agreement are reserved by Tenable". So, to me it looks like a gray zone.

Speaking about the use of Tenable .audit files to make other forms of security content, I found the most interesting limitations in "6. No Reverse Engineering, Other Restrictions": "You may not directly or indirectly: [...] translate or create derivative works of all or any part of the Licensed Materials". When you convert .audit files to some other form, it will probably create a derivative work. However, it's unclear how this combines with the fact that .audit files are often based on publicly available documents or documents that are the intellectual property of third parties, such as the Center for Internet Security.

In any case, it seems that extracting the checks from Tenable .audit files can cause problems and it's better to avoid this. Especially if you work for a security vendor or service provider, because "You may not use the [Licensed Materials] if You are, or You work for, a competitor of Tenable's in the network security software industry. For the avoidance of doubt, You may not include or redistribute the Licensed Materials on physical or virtual appliances to perform on-site scans."

There is also a great section, "3(c). Custom Nessus Plugin Development and Distribution": "Tenable allows users to write and develop new Nessus plugins; however, You must have an active Nessus subscription in order to add plugins to Your Nessus scanner". It's obviously about NASL scripts, and there are restrictions on public distribution of custom plugins that use some APIs and ".inc" libraries. But if .audit scripts are legally "plugins", you can create your own custom content in this form and use such files in any tools, if this makes sense.

Upd. Saved this to my blog.
I recently figured out how to work with Microsoft Active Directory using Python 3. I wanted to get a hierarchy of Organizational Units (OUs) and all the network hosts associated with these OUs to search for possible anomalies.

Some code examples are in my blog: https://avleonov.com/2019/08/12/how-to-get-the-organization-units-ou-and-hosts-from-microsoft-active-directory-using-python-ldap3/
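To give the flavor of the approach, here is a minimal sketch. The connection parameters, base DN, and helper names are illustrative assumptions of mine, not the exact code from the blog post; the LDAP query uses the third-party ldap3 package.

```python
def fetch_computer_dns(server_uri, user, password, base_dn):
    """Query AD for computer objects and return their distinguished names.

    Hypothetical connection parameters; requires the ldap3 package.
    """
    from ldap3 import Server, Connection, SUBTREE
    conn = Connection(Server(server_uri), user=user, password=password,
                      auto_bind=True)
    conn.search(base_dn, "(objectClass=computer)", SUBTREE,
                attributes=["distinguishedName"])
    return [str(entry.distinguishedName) for entry in conn.entries]

def ou_path(dn):
    """Turn 'CN=host1,OU=Servers,OU=HQ,DC=corp,DC=local' into
    ['HQ', 'Servers'] -- the OU hierarchy from the domain root down.
    """
    parts = [p.strip() for p in dn.split(",")]
    ous = [p[3:] for p in parts if p.upper().startswith("OU=")]
    return list(reversed(ous))
```

Grouping the DNs returned by the search with `ou_path` gives the OU-to-hosts mapping that can then be checked for anomalies.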

#API #AssetManagement #ActiveDirectory #AD #BeyondTrust #LDAP #ldap3 #Microsoft #MicrosoftADExplorer #OU #PowerShell #python #python3
This time Patch Tuesday is quite interesting. Two RCEs in Remote Desktop Services (RDS) - Microsoft's implementation of thin client architecture, where Windows software, and the entire desktop of the computer running RDS, are made accessible to any remote client machine that supports Remote Desktop Protocol (RDP). ^wiki

All current Windows versions are affected:

"The affected versions of Windows are Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2012, Windows 8.1, Windows Server 2012 R2, and all supported versions of Windows 10, including server versions.

Windows XP, Windows Server 2003, and Windows Server 2008 are not affected, nor is the Remote Desktop Protocol (RDP) itself affected."

"There is partial mitigation on affected systems that have Network Level Authentication (NLA) enabled."

No information about the exploits yet.

Upd. #DejaBlue is an awesome name 😅
Continuing the Vulnerability Management topic. The first part was how the VM (and Patch Management) process should ideally work, the second was about possible compromises. This one will be about the right mindset and staying focused.

IMHO, all the flexibility of the VM process makes sense ONLY if there are no better options. It's critically important to articulate that the situation in which it's necessary to ignore the requirements of the software vendors and regulators is NOT normal. If an organization can only function in this way, it's someone's fault. And this is certainly not the fault of the IT security guy who has to audit all this mess.

Let's say that there is a monstrous business application that only works with some specific outdated version of Java, and it is impossible to rewrite this application to use the new version or even to test how the app will work with the updated version. Well, doesn't that simply mean the initial decision to use Java was bad?
Does the person who made this decision realize all the circumstances? In fact, we can ask similar questions about all vulnerable systems that cannot be easily updated. There will always be someone's poor decisions behind them.

It's important to constantly clarify that the Vulnerability Management guys (and the entire IT Security Team) are doing their best in the given situation, caused by the bad decisions of other people from IT and Business, and to keep in mind how things should be done right. Otherwise, there is a huge risk of getting stuck in "Stockholm syndrome", and the Vulnerability Management process in the organization will become a complete profanation.
The #Zbrunk project (github) began almost like a joke. And in a way it is one. 😜 In short, my friends and I decided to make an open-source (MIT license) tool that will be a kind of alternative to #Splunk for some specific tasks. So, it will be possible to:

* Put structured JSON events into Zbrunk using an HTTP collector API
* Get the events from Zbrunk using an HTTP search API
* Make information panels based on these search requests and place them on dashboards

Why is it necessary? Well, I've worked a lot with Splunk in recent years. I like the main concepts, and I think working with events is a very effective and natural way of processing and presenting data. But for my tasks (Asset Management, Compliance Management, Vulnerability Management), with several hundred megabytes of raw data per day to process and dashboards that need to be updated once or several times a day, Splunk felt like overkill. You really don't need such performance for these tasks.
And, considering the price, it only makes sense if your organization already uses Splunk for other tasks. After Splunk's decision to leave the Russian market, this became even more obvious, so many people began to look for alternatives for a possible and, as far as possible, painless migration.

We are realistic: the performance and search capabilities of Zbrunk will be MUCH worse. It's impossible to make such a universal and effective solution as a pet project without any resources. So, don't expect something that will process terabytes of logs in near real time; the goal is completely different. But if you want some basic tool for making dashboards, it's worth a try. 🙂

Now, after the first weekend of coding and planning, it's possible to send events to Zbrunk just like you do with the Splunk HTTP Event Collector, and they appear in MongoDB:

$ echo -e '{"time":"1471613579", "host":"test_host", "event":{"test_key":"test_line1"}}\n{"time":"1471613580", "host":"test_host", "event":{"test_key":"test_line2"}}' > temp_data
$ curl -k https://127.0.0.1:8088/services/collector -H 'Authorization: Zbrunk 8DEE8A67-7700-4BA7-8CBF-4B917CE2352B' -d @temp_data
{"text": "Success", "code": 0}

In Mongo:

> db.events.find()
{ "_id" : ObjectId("5d62d7061600085d80bb1ea8"), "time" : "1471613579", "host" : "test_host", "event" : { "test_key" : "test_line1" }, "event_type" : "test_event" }
{ "_id" : ObjectId("5d62d7061600085d80bb1ea9"), "time" : "1471613580", "host" : "test_host", "event" : { "test_key" : "test_line2" }, "event_type" : "test_event" }

Thus, it will be very easy to reuse your existing custom connectors if you already have some. The next step is to make a basic HTTP search API, prepare dashboard data using these search requests, and somehow show these dashboards, for example, in Grafana. Stay tuned and welcome to participate. 😉
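For reference, a hedged Python equivalent of the curl call above, using only the standard library (the endpoint and token come from the example; the helper names are mine):

```python
import json
import ssl
import urllib.request

def build_payload(events):
    """Serialize events to the newline-delimited JSON body that a
    Splunk-style HTTP Event Collector endpoint expects."""
    return "\n".join(json.dumps(e) for e in events)

def send_events(events, url="https://127.0.0.1:8088/services/collector",
                token="8DEE8A67-7700-4BA7-8CBF-4B917CE2352B"):
    """POST events to the collector, skipping TLS verification for the
    self-signed local certificate (the equivalent of curl -k)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(url, data=build_payload(events).encode(),
                                 headers={"Authorization": "Zbrunk " + token})
    return urllib.request.urlopen(req, context=ctx).read()
```

So an existing Splunk connector should only need the URL and the Authorization scheme changed.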
If you can implement a Vulnerability Management process in your organization and, at the same time, build and maintain good relationships with the IT team, it's really awesome. This is the only way to make this process work effectively.

But do you really think that you will be able to keep such good relationships in the case of a major incident caused by some vulnerability on some host? We can again recall the Equifax case, but it can be anything.

Hah. The chances are great that in this case there will be mass infighting, and everyone will try to save their job and face, no matter what. It will be very convenient and natural to make the VM specialist the main or even the only person responsible for all this trouble.
So, I can only recommend that you prepare for this scenario. Treat any vulnerability as if it has ALREADY been exploited by an attacker in a major incident and everyone is going to blame you for it. Ask yourself honestly: what will you need in this worst-case scenario to cover your ass (yep, just like that)? What evidence will show clearly that you did everything that could be done in the given circumstances?

It may seem rude, formal and primitive, but it's actually a very powerful way of thinking. It will give you the best view of Vulnerability Management: what is important, what is not, what you really need and why you need it. Unlike all those "First steps to Vulnerability Management" marketing articles by VM vendors.
Now let's think how we can protect ourselves and make the Vulnerability Management process in the organization better.

When we have a serious IT security incident related to some unpatched vulnerability, this could have happened because:

1) we did not find this vulnerability during the Vulnerability Assessment procedures 🤪🙈

or

2) we found it, but for some reason the issue was not fixed properly and on time 🤦🤷‍♂️
What if we did NOT find this vulnerability, what could be the reason?

1) Maybe this host was not in our Vulnerability Management scope and we simply did not scan it? Who is responsible for updating the scope? Maybe someone told us that we can't scan such systems, because they are too critical and sensitive?

2) Maybe we scanned this host, but not very often, so the vulnerability was exploited before we had a chance to detect it? Why didn't we scan this host several times a week? Who limited our scan rate?

3) Maybe we scanned the host with our vulnerability assessment tool, but this vulnerability was not in the scan results? Did we scan with authentication and all the necessary permissions? If not, then who limited them and why?

The set of hosts we have to assess, the scan credentials and permissions, the scan rate, etc. are all parts of a bigger Asset Management process, which we should have at the very least.
If we did everything the Vulnerability Management vendor recommends, including keeping the scanner engine and detection rules up to date, and still have not detected this vulnerability, the vulnerability assessment tool is probably not good enough, and there are some questions about how we chose it and who made the final decision to buy this garbage. 🤔

So, to protect yourself:

-> track the hosts, credentials and regular scans; collect evidence for each external requirement (most likely from your IT team) that limits your Vulnerability Assessment capabilities, and make them all visible;

-> check the capabilities of your Vulnerability Assessment tool, use several different tools if it is possible.
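The first point boils down to a set difference between the Asset Management inventory and what was actually scanned. A trivial sketch (the function and argument names are mine, not from any specific tool):

```python
def coverage_gaps(inventory_hosts, scanned_hosts):
    """Return hosts from the Asset Management inventory that no recent
    scan has covered -- each one is a blind spot you should be able to
    explain (out of scope? scanning forbidden? credentials missing?)."""
    return sorted(set(inventory_hosts) - set(scanned_hosts))
```

Running this regularly and attaching the reason for each gap is exactly the kind of evidence that protects you later.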
What if we found this vulnerability, but for some reason it was not fixed properly and on time? What could be the reason?

1) Maybe IT administrators were not informed about this vulnerability? Obviously, if you want some vulnerable hosts to be updated, you must inform the responsible team of IT administrators about the problem. Things like "go to the web GUI of our great Vulnerability Management tool and find for yourself what you have to patch" usually don't work. People want to know what exactly they have to do and why. Usually it is necessary to create separate remediation tasks for each case and assign them to specific groups of IT administrators/devops, just to make it harder for them to ignore the problem. 😏 It's great if you have a mutual agreement to run the process without all these tasks (perhaps by using reports or dashboards instead), but make sure that it really works.
2) Maybe IT administrators don't have a formal requirement to fix such vulnerabilities in N days? If you have to prove that each vulnerability in the report is critical and exploitable or it won't be fixed, you will get a huge amount of unnecessary work that can't be easily automated. It will be much easier for you if the IT administrators have to patch vulnerabilities that match some formal criteria without asking additional questions, as, for example, PCI DSS requires us to do. These requirements should be added to the Security Policy of the organization. If you do this, it will be possible to just track the remediation tasks and send reminders to the teams that are behind schedule. Otherwise, continuous manual proving and pushing will take all the time and effort of the security analysts.

3) Maybe IT administrators said that the vulnerability can't be fixed? There will be some vulnerabilities that can't be fixed by a simple update, and you should be ready to offer compensating measures and control the implementation of such measures. If nothing can be done, you should at least collect formal rejections from the responsible IT administrators. All the rejections should be carefully documented and all stakeholders should be informed, so it will be possible to use this data in the case of a real incident. Will this really protect you (a security analyst) in such a case? Not really. But at least it makes the responsible people think about the possible consequences. Even if there was a decision to ignore some vulnerability and the risks were accepted, you should still regularly update the status for each host, because in the future the circumstances and the final decision may change.

As you can see, the Vulnerability Management process requires a lot of communication with the IT teams responsible for remediation. It looks like continuously selling "the need for patching" to your IT guys. Like any sale, sometimes it goes easily, sometimes it requires a lot of pushing and discussing. The thing is that the Vulnerability Management vendors usually don't see this part of the job, and when you try to implement the VM process in your particular organization, you face these problems alone and have to create something like a CRM system, or it simply won't work. 🙂

So, to protect yourself:

-> try to make a formal requirement in the organization "critical vulnerabilities should be fixed in N days";

-> create specific remediation tasks that can't be missed or misinterpreted by the teams of IT administrators;

-> track how quickly the teams of IT administrators close these tasks, and if they don't meet the schedule, discuss and escalate the issue if necessary;

-> track the implementation of countermeasures and all the rejections.

Remember, the ability of IT teams to patch their systems relatively quickly is crucial; without it, all other metrics and prioritization don't make much sense.
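The "fix in N days" tracking can be sketched roughly like this (the task structure and field names are my assumptions, not any particular tracker's schema):

```python
from datetime import date, timedelta

def overdue_tasks(tasks, sla_days, today):
    """IDs of remediation tasks still open past the formal 'critical
    vulnerabilities should be fixed in N days' deadline -- these are
    the ones to remind about, then escalate."""
    deadline = timedelta(days=sla_days)
    return [t["id"] for t in tasks
            if t["closed"] is None and today - t["opened"] > deadline]
```

A weekly report built from this list is already enough to show which teams are behind schedule without any manual proving.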
Yep, I finally added support for simple search requests in #Zbrunk. 😅 You can get events by event_type and time range. You can also delete these events if you set delete: "True" in the search request. See the examples in "MANUAL -> Test cases".

Currently it works quite primitively. I just run a Mongo find (or remove) during the processing of the POST request 🤦‍♂️. So, it will most likely crash if you try to process too many events at once. BUT I hope that it will be enough to start building some dashboards with it 🙃.
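Since the search internals aren't shown here, this is only a guess at what the underlying Mongo filter for "event_type plus time range" might look like, with field names taken from the collector example above:

```python
def search_filter(event_type, earliest, latest):
    """Build a MongoDB filter matching events of one type within a time
    range. 'time' is stored as a string epoch in the collector example,
    so the bounds are stringified the same way (note: this makes the
    comparison lexicographic, which works for same-length epoch strings).
    """
    return {"event_type": event_type,
            "time": {"$gte": str(earliest), "$lte": str(latest)}}
```

Something like `db.events.find(search_filter("test_event", 1471613579, 1471613580))` would then return the two test events from the earlier example.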
The news that Rostelecom (Solar) will begin to provide Qualys Vulnerability Management services (rus) probably doesn't mean much on a global scale, but it's quite interesting for the Russian market and for the markets of other "countries with strict data sovereignty rules".

What problems do we have with global cloud-based security solutions, including Vulnerability Management solutions? When the data about vulnerabilities of Russian organizations is stored and processed somewhere abroad, and it is not clear how and by whom (even if we are not talking about real threats), it's a red flag for government regulators like FSTEC. And they can easily make the usage of such services VERY complicated, at least among the customers that are somehow related to the government. The same restrictions stimulate the development of local security products; that's why we have local players on the Russian #VulnerabilityManagement market, like Positive Technologies, Altx-Soft, NPO Echelon, etc.
BUT when a foreign security vendor delivers its solution in the form of a Private Cloud through the largest Russian service provider, which also has the Russian state as its main shareholder, it's a different story. Data will be stored and processed in Russia, and the US vendor only updates the cloud platform, so what's the problem? If needed, Rostelecom has enough resources to get all the necessary certificates for this cloud service, and may even re-label it as its own.

Currently it is not clear how much the offer from #Rostelecom will differ from the standard #Qualys services. Details of the deal are not publicly known. Will Rostelecom pay Qualys a fixed fee and then try to monetize the service? Will Qualys and Rostelecom somehow share the money from the actual customers? Will Qualys pay Rostelecom for hosting, with the money from customers going directly to Qualys? It's unclear now. Most likely option 1 or 2, but they could have agreed on very different terms. 🙂

But in any case, the domestic Russian Vulnerability Management market might be shaken up. And I think that's great, at least for the end users. 🙂 And if Tenable someday releases its own Private Cloud with Tenable.io, it will be even better. 😉
H.R.2810 - National Defense Authorization Act for Fiscal Year 2018. IMHO, it's a great lesson for any foreign cybersecurity vendor who wants to work in the free and completely competitive US market. 😏 No matter how many Transparency Centers you open and how global you are, it is possible to label you as 'Evil Russians' (or Chinese, Iranians, Koreans, whatever) and ban you without any real evidence. IMHO, this is nothing more than lobbying and protectionism. #kaspersky