Thursday, August 18, 2022

Remotely managing Sysmon configuration through Graylog Sidecar

Introduction:

Sysmon is a tool from Microsoft that collects much richer system logs than the default Windows event logs. Those logs can be very helpful for detecting malicious behavior. Sysmon gets installed as a service and a driver, usually along with whatever configuration file you provide.

Updating the sysmon configuration can be kind of annoying, as sysmon doesn't come with a remote management system. If you have groups of PCs that need different configurations, pushing those out is even more of a hassle.

There are some options for updating sysmon configs. 

  • You could set up a scheduled task that runs a script to look for a new config and perform the update. 
  • You could also use EDR tools you may already have in place to run scripts or commands that update sysmon. 
  • You could utilize one of the Windows remote management features, such as remote PowerShell. 
  • Ansible would work too; it lets you put machines into groups, so you could update the sysmon config per group.
Some of the issues you may run into with the update methods above are:
  • managing different groups of PCs 
  • updates not being pushed out as soon as possible 
  • having to open up ports/services on Windows that you'd rather not expose.

I decided to go with Graylog Sidecar for managing the configuration. Sidecar is usually used to manage configuration for log shippers (Beats, for example), but it can be adapted to manage sysmon configuration as well! Sidecar runs as an agent that connects to the Graylog server to pull sysmon config updates. 

Tools:

Graylog & sysmon, obviously.

You need a Graylog server installed. You can technically use Graylog Sidecar without using Graylog to store your Windows or sysmon logs.

I'm using an admin account for the POC but use whatever the appropriate account is for your requirements.

Setting up sidecar:

We'll set this up manually first, but you can deploy the sidecar agent and sysmon together by creating a package or an initial installation script.

First, we'll need to create a sidecar token.


Go to the Sidecars page and create a new token.



Next, on the Windows host, download and install the sidecar agent.


By default, Sidecar files are located at: C:\Program Files\Graylog\sidecar

After installation, run the following commands (as admin) to install the sidecar as a service and start it:

"C:\Program Files\graylog\sidecar\graylog-sidecar.exe" -service install

"C:\Program Files\graylog\sidecar\graylog-sidecar.exe" -service start

(you can do all of this automatically if you build a package for your organization)

The host should show up on the Sidecar overview page.



Setting up sysmon support on Windows host:

Now that the sidecar connection works, we need to set up sysmon. I'm placing the sysmon executable at C:\Program Files\Graylog\sidecar\sysmon64.exe (make sure it's lowercase, or at least that the case matches the sidecar.yml config file).

Next, we need to edit the sidecar.yml file to allow the use of sysmon64.exe (again, you can build a package that includes a sidecar.yml file which already supports this).

Here's what I have in the config file:
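Roughly something like this (this is a sketch, not my exact file - keep whatever entries the bundled sidecar.yml already has and just add the sysmon64.exe path; also note the allow-list key name differs between sidecar versions, e.g. collector_binaries_whitelist in older releases vs. collector_binaries_accesslist in newer ones, so match whatever your installed version uses):

server_url: "http://your-graylog-server:9000/api/"
server_api_token: "<token from the Sidecars page>"
collector_binaries_accesslist:
  - "C:\Program Files\Graylog\sidecar\sysmon64.exe"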


Restart the graylog sidecar service to make sure it starts up again and that the sidecar.yml config file doesn't have any errors:


Next, we'll need to install sysmon with an initial configuration (in a real deployment, do this during the initial graylog sidecar agent installation):
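That's just the standard sysmon install command, run from the sidecar folder with whatever starting config you want (sysmonconfig.xml here is a placeholder filename):

"C:\Program Files\Graylog\sidecar\sysmon64.exe" -accepteula -i sysmonconfig.xml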


Now we can set up the sysmon configuration in Sidecar to handle updates.

Setting up sysmon sidecar configuration in Graylog:

Go to the collector configuration page in Graylog and click the Create Log Collector button.



Create something that looks like this:


We're using foreground execution since we just need to execute the command that updates the sysmon config and then exit.

The update command is sysmon -c CONFIG_FILENAME, so we're using -c "%s"; %s is the path of our config file once it's written to disk.

The Default Template field can be used to provide a default config, but I'm leaving it empty here.
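So the relevant collector fields end up roughly like this (the path assumes sysmon64.exe was dropped into the sidecar folder as above):

Executable path: C:\Program Files\Graylog\sidecar\sysmon64.exe
Execute parameters: -c "%s"
Execution: foreground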

Next, on the collector configuration page, click Create Configuration.


Name your configuration and add the XML config content.
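The content is just a normal sysmon config. Here's a tiny example to illustrate the format (in practice you'd paste in something like the sysmon-modular or SwiftOnSecurity configs linked at the bottom; the schemaversion below is just an example and has to match what your sysmon binary supports - sysmon64 -s prints the schema):

<Sysmon schemaversion="4.50">
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <ProcessCreate onmatch="exclude" />
    </RuleGroup>
  </EventFiltering>
</Sysmon>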

You can create and add more configurations for different systems you may have.

Pushing configuration to a host:

Go to the sidecar Administration page


Check Sysmon, select the right sysmon configuration, and apply it.


The Graylog web UI may say the update failed, but you can click the host you updated and then click Show details to see more:


Above you can see that the configuration updated without an issue. 

You can also confirm the update by looking at event ID 16 from sysmon in your SIEM or Event Viewer, like below:



Recommendations for building an initial installation package:

Your initial sidecar agent package should do the following:

  • Drop the sysmon executable in the sidecar folder
  • Include a sidecar.yml file that lists the sysmon executable path in the allowed binaries
  • Install sysmon with whatever initial configuration you'd like to use
If you build an installation package that does all of this, you don't have to go through the manual steps above.
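A rough sketch of what such a package or script would run after the sidecar installer finishes (the sidecar installer also takes silent-install parameters for the server URL and API token - check the sidecar docs for the exact flag names for your version; paths and the config filename below are just examples):

copy sysmon64.exe "C:\Program Files\Graylog\sidecar\sysmon64.exe"
copy sidecar.yml "C:\Program Files\Graylog\sidecar\sidecar.yml"
"C:\Program Files\Graylog\sidecar\graylog-sidecar.exe" -service install
"C:\Program Files\Graylog\sidecar\graylog-sidecar.exe" -service start
"C:\Program Files\Graylog\sidecar\sysmon64.exe" -accepteula -i initial_sysmon_config.xml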

Once the sidecar agent + sysmon are deployed, you can remotely manage the sysmon configuration through the Graylog Sidecar UI.

Links/Resources:

https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

https://www.graylog.org/

https://www.graylog.org/features/sidecar

https://github.com/Graylog2/collector-sidecar/releases

https://docs.graylog.org/docs/sidecar

https://github.com/olafhartong/sysmon-modular

https://github.com/SwiftOnSecurity/sysmon-config

https://github.com/LaresLLC/SysmonConfigPusher - I came across this after finishing this write up...

Wednesday, July 20, 2022

Screenshotting/scanning domains from certstream with littleshot to find interesting content

Introduction:

Certstream is a great service that provides updates from the Certificate Transparency logs, which contain info about certs being issued by many providers.

Certstream data has been used in the past for detection of malicious sites or phishing sites. There are several links in the resources section about certstream usage.

Littleshot is a tool similar to urlscan and urlquery (RIP) that I wrote a while ago because I wanted to be able to screenshot a ton of sites and collect metadata about them. (It's here: https://github.com/BoredHackerBlog/littleshot) I later realized having yara scan the HTML body would be cool, so I added that feature as well. There is also a branch that uses Tor for connections. It's not the most optimized project and the error handling isn't the best, but it's good enough for my purposes.

You could put newly registered domains through littleshot as well, but I've decided not to do that for now.

Goals:

- Take certstream domains and scan them with littleshot

- Utilize yara rules to look for interesting pages

- Send some metadata to Humio (littleshot by default doesn't do this) for either alerting, dashboarding, or just searching.

- Ensure that there is caching of domains from certstream to avoid rescanning domains

Tech stack:

I'm hosting everything on vultr. (Here's a ref link if you'd like to try vultr for your projects: https://www.vultr.com/?ref=8969054-8H)

- Littleshot

-- caddy - reverse proxy

-- flask - webapp

-- redis - job queue

-- python-rq - job distribution/workers

-- mongodb - store json documents/metadata

-- minio - store screenshots

- Certstream + Python - I'm getting certstream domains and doing filtering and cache lookups with Python

- Memcached - Caching. I wanna avoid scanning the same domain twice for a while, so I'm using memcached

Setup:

The diagram below shows the setup I have going.


I get data from certstream and I'm using some filtering to ensure that I don't scan certain domains.

Once the keyword-based filtering is done, I check the domain against memcached to make sure it hasn't been scanned in the past 48 hours.

If the domain wasn't scanned in the past 48 hours, I queue it to be scanned with littleshot.
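The actual script is linked below; stripped down, the filter/cache/queue logic looks something like this (the keyword list, queue name, and worker function name are just examples, not littleshot's real internals):

import certstream
from pymemcache.client.base import Client
from redis import Redis
from rq import Queue

SKIP_KEYWORDS = ["cloudflaressl", "azurewebsites"]  # example filter list
cache = Client(("localhost", 11211))
queue = Queue("littleshot", connection=Redis())

def callback(message, context):
    if message["message_type"] != "certificate_update":
        return
    for domain in message["data"]["leaf_cert"]["all_domains"]:
        if domain.startswith("*.") or any(k in domain for k in SKIP_KEYWORDS):
            continue
        if cache.get(domain):  # scanned within the last 48 hours, skip it
            continue
        cache.set(domain, "1", expire=48 * 60 * 60)
        queue.enqueue("worker.scan", "http://" + domain)  # hypothetical worker function

certstream.listen_for_events(callback, url="wss://certstream.calidog.io/")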

When the littleshot worker does the scan, it sends the task ID, domain, title, and yara matches to Humio (besides doing the normal littleshot things).
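That worker change lives in worker.py (linked below); it boils down to a POST to Humio's structured ingest API, roughly like this (URL, token, tag, and field names are placeholders for whatever your Humio setup uses):

import datetime
import requests

HUMIO_URL = "https://cloud.humio.com/api/v1/ingest/humio-structured"
HUMIO_TOKEN = "your-ingest-token"

def send_to_humio(task_id, domain, title, yara_matches):
    event = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "attributes": {"task_id": task_id, "domain": domain, "title": title, "yara": yara_matches},
    }
    requests.post(
        HUMIO_URL,
        headers={"Authorization": "Bearer " + HUMIO_TOKEN},
        json=[{"tags": {"source": "littleshot"}, "events": [event]}],
    )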

Certstream_to_littleshot script - https://github.com/BoredHackerBlog/certstream_to_littleshot/blob/main/certstream_to_littleshot.py

Yara rules (these aren't the best. you should probably write your own based on your needs) - https://github.com/BoredHackerBlog/certstream_to_littleshot/blob/main/rules.yar

Worker code to support sending data to Humio - https://github.com/BoredHackerBlog/certstream_to_littleshot/blob/main/worker.py


Interesting stuff I came across:

- Lots of wordpress and nextcloud/owncloud sites and general stuff people self-host
- Carding forum?
- Argo CI/CD without auth?
- Piracy site


No phishing sites or C2, at least with my yara rules.

Here are the yara hits in Humio (ignore abc and xyz; that was me testing the Humio API):



What I would do differently with more time and resources (with this project and with littleshot):

- Better error handling - Current error handling is meh
- Get rid of mongodb and replace it with opensearch or graylog maybe? - Opensearch and graylog are great when it comes to searching.
- Potentially having an indicator list built into littleshot?
-- Currently tagging is based on yara rules, but there are many ways to detect maliciousness, such as hashes or URLs.
- Enrichment of data like urlscan does
- Better webui - the current webui is pretty rough; I don't know enough HTML/CSS/JavaScript
- Better logging. There is logging of results but no logging of anything else (queries, crashes, etc...)
- Redirect detection & tagging. Some domains do redirect to legitimate login pages.


Resources & similar projects:

https://github.com/ninoseki/uzen - ninoseki's GitHub has really cool projects. This one is actually very similar to littleshot.
https://github.com/InfosecExtra/StreamshotY - littleshot fork that someone hooked up with certstream. It has a refreshing page of screenshots too like urlscan.


(if the blog post formatting looks odd, it's because Blogger editor interface hates me)

Wednesday, July 13, 2022

Building a honeypot network with inetsim, suricata, vector.dev, and appsmith

I wanted to learn a bit more about data engineering, databases, app building, managing systems, and so on, so I decided to work on a small honeypot network as a project. I was partially inspired by Greynoise and AbuseIPDB; I use both of those a lot. I wanted to get this done in about a week, so it's a small project that isn't too scalable. I ended up learning things, so it's fine.

My goals:

- Use Suricata to see what type of signatures are triggered based on the incoming traffic from the internet
- Save all the Suricata logs to disk in a central place so I can go back and search all the data or reingest the data.
- Send logs to Humio for searching, dashboarding, and potentially alerting purposes
- Have a webapp for searching for an IP
-- Webapp should show the signatures the IP has triggered, first time the IP was seen, last time the IP was seen, and number of times it was seen triggering signatures.

My tech stack:

- Sensors & databases are hosted on Vultr w/ Ubuntu
- Obviously Suricata for detecting attack attempt type
- Inetsim - this is not the best (I'm letting the attackers know I'm not running any real services, it's just inetsim, assuming attackers manually go look at the scan results), but it'll do for this project
- Zerotier - all sensors are connected to a zerotier network, it just makes networking, moving data around, and management easier
- Vector.dev - I'm using vector.dev to move data around
- Humio - it's for log storage and search, just like ELK or Splunk
- rinetd - I'm actually not running inetsim on all the sensors, I'm just forwarding all the traffic from sensors to one host running inetsim (it's good enough for this project)
- Redis - pubsub. I'm putting alerts into redis and letting python grab them and put the data in postgresql
- Postgresql - to store malicious IP, signature, and timestamp
- Appsmith - to make the webui app (usually I'd use flask...)

Networking:

Network kinda looks like this w/ Zerotier:

Sensors are exposed to the internet; the servers aren't. rinetd takes in sensor traffic from the internet and forwards it to inetsim, which is bound to the zerotier IP address.
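rinetd config is just one forwarding rule per line (bind address, bind port, destination address, destination port), so each sensor's /etc/rinetd.conf is basically a list like this, with the inetsim host's zerotier IP as the destination (shown as a placeholder here):

0.0.0.0 21 <inetsim-zerotier-ip> 21
0.0.0.0 25 <inetsim-zerotier-ip> 25
0.0.0.0 80 <inetsim-zerotier-ip> 80
0.0.0.0 443 <inetsim-zerotier-ip> 443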


Logging:

The flow for logs kinda looks like this:

Vector on each sensor reads eve.json and sends the data to vector on the ingest server.
Vector on the ingest server does multiple things: it saves the data to disk, sends it to Humio, adds geoip info to the alerts, and pushes those to redis; a Python script then reads the alerts from redis and puts them into postgres.

postgres stores malicious IP, suricata signature, and timestamp.



Python script being used to process redis data and add data to postgres: https://github.com/BoredHackerBlog/dumbhoneypot/blob/main/process_redis.py
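Conceptually it's a small loop like this (the channel, table, and column names here are placeholders; see the linked script for the real thing):

import json
import psycopg2
import redis

r = redis.Redis(host="localhost")
pubsub = r.pubsub()
pubsub.subscribe("suricata_alerts")  # example channel name

conn = psycopg2.connect(dbname="honeypot", user="honeypot", password="...")
cur = conn.cursor()

for message in pubsub.listen():
    if message["type"] != "message":
        continue
    alert = json.loads(message["data"])
    # pull the fields we care about out of the suricata eve alert
    cur.execute(
        "INSERT INTO alerts (src_ip, signature, ts) VALUES (%s, %s, %s)",
        (alert["src_ip"], alert["alert"]["signature"], alert["timestamp"]),
    )
    conn.commit()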


Webapp

I used AppSmith for the webapp. AppSmith allows you to build a webapp and connect it to integrations it supports with little to no coding. 

For the webapp, I just have an input field and some queries that run based on the input. It looks like this:
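The queries behind it are simple aggregations over the postgres table, something along these lines (table/column names match the sketch above and are placeholders; {{ip_input.text}} is the Appsmith binding for the input widget, whatever you named it):

SELECT signature,
       MIN(ts)  AS first_seen,
       MAX(ts)  AS last_seen,
       COUNT(*) AS times_seen
FROM alerts
WHERE src_ip = {{ip_input.text}}
GROUP BY signature;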




What I would do differently if I had more time and resources:
- I'd probably set up more realistic honeypots or have honeypot profiles
- Put honeypot software on the sensor itself instead of doing rinetd
- Ship logs through the internet (not zerotier)
- Do geoip enrichment on the sensor itself
- Store alert data in opensearch or some cloud hosted database that I don't have to maintain?
- Add health monitoring for sensor, pipeline, etc..
- Better deployment and updates (of software and suricata signatures), potentially through ansible?

There are probably many other things that can be done differently or more efficiently.


Resources/links:

Saturday, March 5, 2022

Quick analysis of stealer malware sent via discord

Introduction:

Just a quick analysis of malware sent via discord...

I got the malicious file from someone who received the file via Discord from a trusted account (which was compromised...)


Analysis:

Hash: 4f709e1c6951bbd65d03a9f44961e0ae

Original filename: Fruit_of_the_ace_v3.11.99.exe

The file looked like a nodejs binary.

pdb string: C:\Users\runneradmin\AppData\Local\Temp\pkg.3d5278e5642d39a96bc8ed09\node\out\Release\node.pdb


I started by analyzing the file locally but didn't get anywhere quickly so I moved to hatching triage for analysis.

Results: https://tria.ge/220220-wnqp3sbeh6


Here's the process list:


The executable drops and starts temp.ps1, which contains code to hide the window.


Once the window is hidden, it seems to download and execute MachineMania.exe, which is a Python executable made with pyinstaller. The argument provided to machinemania.exe during execution is a discord webhook URL.

Looking through Triage and file access I see the following:


It appears to be looking for the applications listed above, but I'm not sure if it actually does anything when the apps are installed.


Network traffic kinda looks like this:

It looks like there is a connection to OneDrive. I was pretty sure that's where the MachineMania.exe was being downloaded from but I didn't see much in the pcap or Triage output.

I went back to the debugger and found the exact OneDrive link hosting MachineMania.exe in memory.



Looking at MachineMania.exe:

Hash: 725918a6ae94e864908946ebb5e98dee

This is a pyinstaller file. 

I analyzed that file in Triage, but I replaced the discord webhook with a webhook.site webhook.

https://tria.ge/220220-ycs26scgdr/behavioral1

The file interaction looks a bit different in the analysis of this file. Unlike the original file, this one only looks at the Chrome folder.



The webhook requests look kinda like this:



It appears to be only looking at Chrome. 


To further analyze the file, I started decompiling the pyinstaller file to see what it has inside.

I used this to extract the files: https://github.com/extremecoders-re/pyinstxtractor
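Usage is just pointing it at the exe, which dumps the embedded .pyc files into a folder; those can then be fed to a decompiler (decompyle3/uncompyle6 or similar, depending on the Python version the sample was built with):

python pyinstxtractor.py MachineMania.exe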

I found the following files to be interesting


The discordwebhook content looks like it's just doing a POST request.
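In other words, roughly the equivalent of this (not the sample's exact code, just what a discord webhook post generally looks like):

import requests

def post_to_webhook(webhook_url, content):
    # discord webhooks accept a simple JSON body with a "content" field
    requests.post(webhook_url, json={"content": content})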

I wasn't able to get anything out of the chrome.pyc file, but it appears to be packed with pyarmor.


Additionally, while doing some analysis of the original file in the debugger, I also saw this:



I'm not really sure what other capabilities the original file has. There is probably more, but I mostly looked at the dropped pyinstaller/machinemania.exe file. I'm also not sure how the nodejs code gets packed into an executable.


Conclusion

Based on limited and quick analysis, it looks like a compromised account will share the exe file.

The original exe file will download and execute a file from OneDrive.

The OneDrive file will steal data and send it to discord via a webhook.

To me, the OneDrive file only seems to be stealing Chrome saved passwords and nothing else.


Similar sample: https://twitter.com/GlitchyPSI/status/1439674473515569154

https://www.virustotal.com/gui/file/60e75541c4b4130151fb2b80f04cd699ba0e66bf6c4ec364127e93a38dccefa9/relations

Check the Execution Parents. There are a lot of node binaries, and the filenames usually look like names of games.