Findomain is an all-in-one recon tool focused on automation. In this article we are not going to teach subdomain discovery itself, but rather how to take advantage of the features Findomain offers to automate your recon process.
Many of Findomain’s features are unknown to most users, so here we are going to show you all of them.
Note: Commands that start with # require root privileges; those that start with $ do not.
Downloading Findomain
You can compile Findomain from source or download precompiled binaries. Many distros include Findomain in their repositories, but to avoid giving instructions for each one, we will use the precompiled binaries from our GitHub repo.
Tip: Check whether your Linux distro ships a Findomain package; if it does, install it through your package manager instead.
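For example, on a distro that packages Findomain, installation is a single command (pacman is shown here only as an illustration; check your own package manager first):
# pacman -S findomain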
# curl -L https://github.com/Edu4rdSHL/findomain/releases/latest/download/findomain-linux -o /usr/bin/findomain
# chmod +x /usr/bin/findomain
$ findomain
You should see the following output in your terminal emulator:
$ findomain
Findomain 3.0.1
Eduard Tolosa <[email protected]>
The fastest and cross-platform subdomain enumerator, do not waste your time.
USAGE:
findomain [FLAGS] [OPTIONS]
FLAGS:
-x, --as-resolver Use Findomain as resolver for a list of domains in a file.
--mtimeout Allow Findomain to insert data in the database when the webhook returns a timeout error.
... snip
Preparing the environment
You can use Findomain on Linux, Windows, macOS and even on Android by installing the findomain package from the official Termux repos. In this article we will focus on how to use the tool on Linux.
Dependencies
Since we are going to focus on automation in this article, we must make use of Findomain’s monitoring ability. For that we need:
- PostgreSQL: the database engine used by Findomain. PostgreSQL is the world’s most advanced open source relational database.
- Chromium: the browser used to take the screenshots.
You will find the PostgreSQL package in practically every Linux distribution; install it using your package manager.
- Debian
# apt install postgresql chromium
# systemctl enable --now postgresql
# mkdir /var/findomain
Now we need to modify the /etc/postgresql/13/main/pg_hba.conf file and change the following lines:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5
To
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
And then restart the postgresql service.
# systemctl restart postgresql
CAUTION: Configuring the system for local “trust” authentication allows any local user to connect as any PostgreSQL user, including the database superuser. If you do not trust all your local users, use another authentication method.
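If you prefer not to use trust authentication, one alternative (not covered in the rest of this article) is to keep the md5 method and set a password for the postgres user, then include those credentials in the postgres_connection string shown later. The password below is just a placeholder:
# su -l postgres
[postgres]$ psql -c "ALTER USER postgres WITH PASSWORD 'choose-a-strong-password';"
[postgres]$ exit
The connection string in the configuration file would then look like postgresql://postgres:choose-a-strong-password@localhost:5433.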
- ArchLinux
# pacman -S postgresql chromium
# su -l postgres
[postgres]$ initdb -D /var/lib/postgres/data
[postgres]$ exit
# systemctl enable --now postgresql
# mkdir /var/findomain
For Arch Linux, you don’t need additional steps: the default pg_hba.conf generated by initdb already uses trust authentication for local connections.
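Regardless of the distro, you can quickly confirm that the database accepts connections before moving on; this is just an optional sanity check (add -p 5433 if your cluster listens on that port):
$ psql -U postgres -c "SELECT version();"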
Testing the database
Now that we have the database configured, it is time to validate that it works correctly. For this we are going to use the --no-monitor option, which saves data to the database without requiring any notification method to be configured.
NOTE: Debian uses port 5433 here. If PostgreSQL runs on a port other than 5432, you need to specify it with the --postgres-port option.
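If you are not sure which port your cluster listens on, Debian's pg_lsclusters (from the postgresql-common package) will show it, and ss will confirm what the server is actually bound to; both commands are optional checks:
$ pg_lsclusters
# ss -ltnp | grep postgres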
$ findomain -t findomain.app --no-monitor --postgres-port 5433
Testing connection to database server…
Connected, performing enumeration!
Target ==> findomain.app
Searching in the CertSpotter API… 🔍
Searching in the Crtsh database API… 🔍
Searching in the Virustotal API… 🔍
Searching in the Sublist3r API… 🔍
Searching in the Facebook API… 🔍
Searching in the Spyse API… 🔍
Searching in the Bufferover API… 🔍
Searching in the Threatcrowd API… 🔍
Searching in the AnubisDB API… 🔍
Searching in the Urlscan.io API… 🔍
Searching in the Threatminer API… 🔍
Searching in the Archive.org API… 🔍
Searching in the Ctsearch API… 🔍
support.findomain.app
findomain.app
blog.findomain.app
Good luck Hax0r 💀!
Now we validate that the data has been saved:
$ psql -U postgres -c "select * from subdomains"
id | name | ip | http_status | open_ports | root_domain | jobname | timestamp
----+-----------------------+-------------+-------------+-------------+---------------+-----------+-------------------------------
1 | findomain.app | NOT CHECKED | NOT CHECKED | NOT CHECKED | findomain.app | findomain | 2021-02-09 22:03:04.434149-05
2 | blog.findomain.app | NOT CHECKED | NOT CHECKED | NOT CHECKED | findomain.app | findomain | 2021-02-09 22:03:04.434149-05
3 | support.findomain.app | NOT CHECKED | NOT CHECKED | NOT CHECKED | findomain.app | findomain | 2021-02-09 22:03:04.434149-05
(3 rows)
If you see output similar to the above, the database setup is complete. Let's continue with the subdomain monitoring configuration and the integration with other tools.
Findomain monitoring and alerts
Findomain was a pioneer in implementing a subdomain monitoring system backed by a relational database, and it handles anything from a few thousand subdomains to hundreds of millions or more.
The first thing we must do is choose the method(s) we will use to receive notifications: Discord, Slack or Telegram. We will explain how to do it for Discord and Slack; if you want to configure Telegram, you can read the documentation.
Getting the Discord Webhook
Create a Discord server and a text channel called #findomain-monitoring, then go to Server Settings -> Integrations -> Webhooks -> New Webhook -> Change the channel to #findomain-monitoring and rename the webhook (optional) -> Save changes -> Copy Webhook URL.
Now that we have our Discord webhook URL, let's save it.
Getting the Slack Webhook
- Create a Slack workspace and a channel called #findomain-monitoring
- Create a new Slack App
- Select “Incoming Webhooks”
- Activate the Incoming Webhooks and click “Add webhook to workspace” at the end of the page.
- Select your channel and click “Allow”
Copy the webhook URL and save it.
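You can verify that a webhook works before handing it to Findomain. The curl calls below are an optional sanity check; replace the placeholder URLs with your own (Discord expects a JSON body with a content field, Slack with a text field):
$ curl -H "Content-Type: application/json" -d '{"content": "findomain webhook test"}' "https://discord.com/api/webhooks/XXXX/YYYY"
$ curl -H "Content-Type: application/json" -d '{"text": "findomain webhook test"}' "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"
If the test message shows up in the corresponding channel, the webhook is ready.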
Creating the config.toml file
The next step is to create our configuration file for Findomain. Findomain supports configuration in TOML, JSON, HJSON, YAML and INI; here we will use TOML, which is the simplest and clearest format. You can get a sample file from the Findomain repository.
# Findomain configuration file. The following listed values are all the possible values you can set at the moment.
# IF YOU ARE NOT USING ONE OF THEM PLEASE LEAVE THIS EMPTY OR DELETE IT FROM THE FILE
postgres_connection = "postgresql://postgres:postgres@localhost:5433"
slack_webhook = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"
discord_webhook = "https://discord.com/api/webhooks/XXXX/YYYY"
#virustotal_token = "Your VirusTotal token here"
#fb_token = "Your Facebook token here"
#securitytrails_token = "Your SecurityTrails token here"
#spyse_token = "Your Spyse token here"
#c99_api_key = "Your C99 API key"
#telegrambot_token = "Your Telegram bot token here"
#telegram_chat_id = "Your Telegram chat ID here"
#user_agent = "Your custom User Agent here"
#threads = "Number of threads here"
#rate_limit = "Number of seconds for the rate limit here"
#dbpush_if_timeout = "Enable --mtimeout flag"
#no_monitor = "Enable --no-monitor flag"
#exclude_sources = "List of sources to exclude separated by comma"
As you can see, Findomain has multiple options available in its configuration file, from API keys to flags and options. At the moment we are interested in the first three, so we comment out the others and save the file as /var/findomain/config.toml.
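As an optional check, you can run Findomain once against a test domain with the new configuration file to confirm it can read it and reach the database; since the connection string already contains the port, --postgres-port is not needed here:
$ findomain -t findomain.app -c /var/findomain/config.toml --no-monitor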
Integrating other tools
We know that a single tool is not enough for recon, so let's integrate the passive results of four other popular tools (OWASP Amass, Sublist3r, Assetfinder and Subfinder) into our process. For that we will use a bash script that runs the other tools, stores their results in a single file, and then executes Findomain with the flags needed to import those results for processing.
Note: Please read the respective documentation for each tool to learn how to install it on your system; that's a basic step outside the scope of this article.
#!/bin/bash

if [[ "$#" -ne 1 ]]; then
    echo "Please specify a domains file."
    echo "Usage: $0 domains_file.txt"
    exit 1
fi

domains_file="$1"
total_file="all_external_subdomains.txt"

# Execute all external tools in parallel for more complete subdomains results
# and save all the results in a single file.
external_sources() {
    local amass_file="amass_output.txt"
    local subfinder_file="subfinder_output.txt"
    local assetfinder_file="assetfinder_output.txt"
    local sublister_file="sublister_output.txt"
    local domain="$1"

    touch "$amass_file" "$subfinder_file" "$sublister_file" "$assetfinder_file"

    amass enum --passive -d "$domain" -o "$amass_file" >/dev/null &
    subfinder -silent -d "$domain" -o "$subfinder_file" >/dev/null &
    assetfinder -subs-only "$domain" > "$assetfinder_file" &
    sublist3r -d "$domain" -o "$sublister_file" >/dev/null &
    wait

    cat "$amass_file" "$subfinder_file" "$sublister_file" "$assetfinder_file" > "$total_file"
    rm -f "$amass_file" "$subfinder_file" "$sublister_file" "$assetfinder_file"
}

while IFS= read -r domain; do
    if [ -n "$domain" ]; then
        # Strip carriage returns in case the domains file was edited on Windows.
        fixed_domain=${domain//$'\r'/}
        external_sources "$fixed_domain"
        # Import the external results and let Findomain resolve IPs, check HTTP status,
        # scan ports 1-65535, take screenshots and push everything to the database/webhooks.
        findomain -t "$fixed_domain" --import-subdomains "$total_file" -o -c /var/findomain/config.toml -m -q --http-status -i --pscan --iport 1 --lport 65535 -s screenshots --mtimeout --threads 50
        rm -f "$total_file"
    fi
done < "$domains_file"
Save the bash script in /usr/bin/monitoring and then execute chmod +x /usr/bin/monitoring
Start the monitoring process
Testing our configuration
Let’s create a file called /var/findomain/targets.txt with the targets that you want to monitor, one per line. For example:
findomain.app
discord.com
Now execute:
$ monitoring /var/findomain/targets.txt
After a few minutes you will start receiving the alerts:
[Screenshots: alert notifications in Discord and Slack]
If you receive the alerts, everything is configured correctly and we are ready to automate the whole recon process.
Automating the execution
Everything works fine, but having to manually rerun the same command every time the process finishes is definitely not a good idea. That's why we will use a systemd service and timer: the service is in charge of executing the commands we need, while the timer activates the service periodically.
Save the following content in /etc/systemd/system/findomain.service
[Unit]
Description=Monitor subdomains with Findomain.
Wants=network-online.target
After=network-online.target
[Service]
WorkingDirectory=/var/findomain
LimitNOFILE=49152
ExecStart=/bin/bash -c "monitoring /var/findomain/targets.txt"
StartLimitBurst=0
Restart=on-failure
RestartSec=50min
KillMode=process
KillSignal=SIGINT
[Install]
WantedBy=default.target
and the following in /etc/systemd/system/findomain.timer
[Unit]
Description=Monitor subdomains with Findomain - Timer.
[Timer]
OnBootSec=10min
OnUnitActiveSec=1h
[Install]
WantedBy=timers.target
Now we just need to enable the timer; for that we use the following command:
# systemctl enable --now findomain.timer
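If you want to confirm that the timer is scheduled and follow the service output, the standard systemd commands below are enough (shown as optional suggestions):
$ systemctl list-timers findomain.timer
# journalctl -u findomain.service -f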
From this moment on, a search for new subdomains is carried out every hour and you will receive alerts on the webhooks you have configured, with all the data for the new subdomains such as IP, HTTP status and open ports. Additionally, in the /var/findomain/screenshots folder you will have screenshots of the web pages of the subdomains that have an HTTP server running.
Imagine getting the HTTP and HTML data, screenshots of the websites, and the results of running ffuf, nuclei and nmap against the new subdomains, with everything delivered to your email, including high and critical vulnerabilities. If that sounds interesting, check out our Monitoring and Vulnerability Discovery service.
Here are a few tweets that will help you play with Findomain:
- https://twitter.com/FindomainApp/status/1357180641524219910
- https://twitter.com/FindomainApp/status/1353385099501441032
- https://twitter.com/FindomainApp/status/1353993541303857152
We hope you liked this article. If so, share it!
Regards,
Findomain Team