Subdomain Crawler

This program helps you collect the subdomains of a list of given second-level domains (SLDs).
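
For illustration, below is a minimal Go sketch of one common way a subdomain crawler works: fetch a page under the target SLD and scan the response for hostnames that end with that SLD. This is a sketch of the general technique only, not this package's actual implementation; the seed URL and the hostname regular expression are invented for the example, and the 4-second timeout simply mirrors the tool's documented default.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"regexp"
    	"time"
    )

    func main() {
    	// One SLD, as it would appear on a line of input.txt.
    	sld := "tsinghua.edu.cn"

    	// The seed URL is an assumption for this sketch; the timeout
    	// mirrors the tool's default --timeout of 4 seconds.
    	client := &http.Client{Timeout: 4 * time.Second}
    	resp, err := client.Get("https://www." + sld)
    	if err != nil {
    		fmt.Println("fetch failed:", err)
    		return
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}

    	// Collect every hostname in the page that ends with the target
    	// SLD (e.g. lib.tsinghua.edu.cn), deduplicated via a set.
    	re := regexp.MustCompile(`[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*\.` + regexp.QuoteMeta(sld))
    	seen := make(map[string]bool)
    	for _, host := range re.FindAllString(string(body), -1) {
    		if !seen[host] {
    			seen[host] = true
    			fmt.Println(host)
    		}
    	}
    }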

Installation

  • Option 1: Download from GitHub Releases directly (Recommended)

  • Option 2: Go Install

    $ go install github.com/WangYihang/Subdomain-Crawler/cmd/subdomain-crawler@latest
    
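If you use go install, the binary is placed in $GOBIN (by default $HOME/go/bin), so make sure that directory is on your PATH. You can then confirm the installation with the documented --version flag:

    $ subdomain-crawler --version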

Usage

  1. Edit the input file input.txt:

     $ head input.txt
     tsinghua.edu.cn
     pku.edu.cn
     fudan.edu.cn
     sjtu.edu.cn
     zju.edu.cn

  2. Run the program (see the example invocation after these steps):

     $ subdomain-crawler --help
     Usage:
       subdomain-crawler [OPTIONS]

     Application Options:
       -i, --input-file=    The input file (default: input.txt)
       -o, --output-folder= The output folder (default: output)
       -t, --timeout=       Timeout of each HTTP request (in seconds) (default: 4)
       -n, --num-workers=   Number of workers (default: 32)
       -d, --debug          Enable debug mode
       -v, --version        Version

     Help Options:
       -h, --help           Show this help message

     $ subdomain-crawler

  3. Check out the results in the output/ folder:

     $ head output/*
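
As a fuller example, the invocation below spells out every documented option with its default value; raising --num-workers is the usual way to speed up large crawls, at the cost of more concurrent HTTP requests:

    $ subdomain-crawler --input-file=input.txt --output-folder=output --timeout=4 --num-workers=32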
