Cover image: https://cdn.naraku.cn/imgs/Crawlergo_X_Xray-0.jpg
Summary: Setting up Crawlergo + Xray + Httprobe on Linux for automated batch vulnerability scanning

## Environment

### Prerequisites

```shell
# Install dependencies
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install unzip git tree
$ sudo apt install python3      # Python3
$ sudo apt install python3-pip  # Pip3
$ pip3 install simplejson requests fake_useragent
```
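Before moving on, a quick sanity check that the toolchain is in place (a minimal sketch; nothing here is specific to this setup):

```shell
# Verify the interpreter and package manager respond
$ python3 --version
$ pip3 --version
# Verify the Python dependencies import cleanly
$ python3 -c "import simplejson, requests, fake_useragent; print('OK')"
```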

## Crawlergo_X_Xray

- Clone and configure crawlergo_x_XRAY:

```shell
$ git clone https://github.com/timwhitez/crawlergo_x_XRAY.git
$ cd crawlergo_x_XRAY/
$ vim launcher.py
```

- Edit `launcher.py` so that crawlergo points at the local Chromium binary instead of the Windows Chrome path:

```diff
- cmd = ["./crawlergo", "-c", "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", "-t", "20", "-f", "smart", "--fuzz-path", "--output-mode", "json", target]
+ cmd = ["./crawlergo", "-c", "/snap/bin/chromium", "-t", "20", "-f", "smart", "--fuzz-path", "--output-mode", "json", target]
```
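If Chromium lives somewhere else on your system, `command -v` reports the path to substitute into the diff above (the binary name varies by distribution, so both common names are tried here):

```shell
# Locate the Chromium binary; the name differs across distributions.
# Snap installs typically resolve to /snap/bin/chromium
$ command -v chromium chromium-browser
```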

- Tidy up:

```shell
# Remove extra files
$ rm launcher_new.py README.md
$ rm -rf img/
# Move the two files into the crawlergo directory
$ mv launcher.py crawlergo/
$ mv targets.txt crawlergo/
```

### Xray

```shell
# Download and unpack xray_linux_amd64
$ cd ../xray/
$ wget https://github.com/chaitin/xray/releases/download/1.5.0/xray_linux_amd64.zip
$ unzip xray_linux_amd64.zip && rm xray_linux_amd64.zip
# Generate the CA certificate
$ ./xray_linux_amd64 genca
```

### Chromium

```shell
# Install Chromium (Debian/Ubuntu)
$ sudo apt install chromium-browser
# Install Chromium (CentOS)
$ yum install chromium
# Install the xray CA certificate
$ sudo cp ca.crt /usr/local/share/ca-certificates/xray.crt
$ sudo update-ca-certificates
```
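To confirm the certificate is trusted, a quick check (a sketch, assuming xray is already running as a passive proxy on 127.0.0.1:7777 as configured in run.sh below): an HTTPS request proxied through xray should succeed without a certificate error.

```shell
# With xray listening on 127.0.0.1:7777, this should print 200
# without needing -k/--insecure
$ curl -x http://127.0.0.1:7777 https://www.baidu.com -s -o /dev/null -w "%{http_code}\n"
```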

## Alive Detection

- Install httprobe:

```shell
$ cd ../
# Download and unpack httprobe
$ wget https://github.com/tomnomnom/httprobe/releases/download/v0.1.2/httprobe-linux-amd64-0.1.2.tgz
$ tar zxvf httprobe-linux-amd64-0.1.2.tgz
$ rm httprobe-linux-amd64-0.1.2.tgz
# Create the scan script
$ vim check.py
```

- Before running `check.py`, fill in the paths to `chromium` and `httprobe`.
- Run it with `python3 check.py -f domains.txt`.
- Domains in `domains.txt` must not carry the `http(s)://` scheme, otherwise liveness cannot be detected correctly (see the input/output sketch after the code).

```python
# coding: utf-8
import os
import argparse

# Paths: adjust these to your environment
chrome_path = r'/snap/bin/chromium'
httprobe_path = r'/root/crawlergo_x_XRAY/httprobe'
save_dir_name = './'


def parse_args():
    usage = "python3 check.py -f domains.txt"
    parser = argparse.ArgumentParser(usage=usage)
    parser.add_argument('-f', '--file', help='Input Domains File', type=str)
    return parser.parse_args()


def do_httprobe():
    # Pipe the domain list through httprobe and collect the alive URLs
    path = args.file
    if os.name == 'nt':
        httprobe_result = os.popen(f'type {path} | {httprobe_path}').read()
    elif os.name == 'posix':
        httprobe_result = os.popen(f'cat {path} | {httprobe_path}').read()
    else:
        print('[-] Unable to identify operating system')
        return
    save_path = os.path.join(save_dir_name, 'targets.txt')
    with open(save_path, 'w+', encoding='utf-8') as file_obj:
        file_obj.write(httprobe_result)
        file_name = file_obj.name
    print('[+] Alive subdomains are saved in %s' % file_name)


def main():
    if not os.path.exists(args.file):
        print(f'[*] {args.file} has an error, please check.')
    else:
        do_httprobe()


if __name__ == '__main__':
    args = parse_args()
    main()
```
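A minimal sketch of what the alive check consumes and produces, assuming httprobe's default behavior of probing ports 80 and 443 (the hostnames here are illustrative):

```shell
# domains.txt: bare hostnames, no scheme
$ cat domains.txt
www.example.com
api.example.com

# httprobe emits one URL per reachable scheme/host pair
$ cat domains.txt | ./httprobe
http://www.example.com
https://www.example.com
https://api.example.com
```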

## Run.sh

- In the `crawlergo_x_XRAY/` directory, create a `run.sh` script:

```shell
# Alive detection
if [[ $1 == "check" ]]
then
    if [ $2 ]
    then
        python3 check.py -f $2
    else
        echo "[-] No Domain File"
        echo "Example: bash run.sh check domain.txt"
    fi
# Serve reports over HTTP
elif [[ $1 == "http" ]]
then
    python3 -m http.server 80
# Vulnerability scanning
elif [[ $1 == "start" ]]
then
    today=$(date +%Y%m%d-%H%M%S)
    echo "[+] Start at " $today
    # Enter the xray directory, run xray in the background,
    # write scan results to HTML and runtime logs to logs.xray
    cd xray/
    nohup ./xray_linux_amd64 webscan --listen 127.0.0.1:7777 --html-output $today.html >../logs.xray 2>&1 &
    echo "[+] Xray Run Success..."
    sleep 3
    # Enter the crawlergo directory and run launcher.py
    cd ../crawlergo/
    nohup python3 launcher.py >../logs.crawlergo 2>&1 &
    echo "[+] Crawler_X_Xray Run Success..."
# Usage
else
    echo """Usage:
    Alive detection: bash run.sh check
    Serve HTTP:      bash run.sh http
    Start scanning:  bash run.sh start
    """
fi
```
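Once started, both tools run detached under `nohup`, so progress is only visible through the log files the script redirects to (file names as defined in run.sh above):

```shell
# Follow the crawl and scan logs while they run
$ tail -f logs.xray logs.crawlergo
```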

- The current directory structure looks like this:

![](https://cdn.naraku.cn/imgs/Crawlergo_X_Xray-1.jpg)

```
├── check.py
├── crawlergo
│   ├── crawlergo
│   └── launcher.py
├── httprobe
├── run.sh
└── xray
    ├── ca.crt
    ├── ca.key
    ├── config.yaml
    └── xray_linux_amd64
```
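The listing above can be regenerated at any time with the `tree` utility installed during the prerequisites; depth 2 is enough to match it:

```shell
# Print the directory layout two levels deep
$ tree -L 2 .
```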

## Testing

```shell
$ cd ../
$ which chromium
$ ./xray/xray_linux_amd64 version
$ ./crawlergo/crawlergo -c /snap/bin/chromium -t 5 http://www.baidu.com
```

## Running

- Put the subdomains into `domains.txt` and run the alive detection. Domains must not carry the `http(s)://` scheme:

```shell
$ bash run.sh check domains.txt
# or: python3 check.py -f domains.txt
```

- Then move the generated `targets.txt` into the crawlergo directory and start the scan:

```shell
$ mv targets.txt crawlergo/
$ bash run.sh start
```

- Inspect and stop:

```shell
$ ps -ef        # list processes
$ kill -9 <pid> # kill a process
```
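Since `ps -ef` lists everything, a filtered lookup narrows it to the two background jobs (the pattern is an assumption based on the process names run.sh launches):

```shell
# Show only the xray and crawlergo launcher processes
$ ps -ef | grep -E 'xray_linux_amd64|launcher.py' | grep -v grep
```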


- To view or download the vulnerability report, just serve the directory over HTTP with Python:

```shell
$ bash run.sh http
# or: python3 -m http.server 8080
```
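Assuming `run.sh http` is serving the repository root on port 80, a report written by the start step would then be reachable at a URL like the following (the timestamped file name is hypothetical, following the `date +%Y%m%d-%H%M%S` pattern in run.sh):

```shell
# Fetch a report from a hypothetical run started at 2024-01-01 12:00:00
$ curl -O http://<server-ip>/xray/20240101-120000.html
```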

## Miscellaneous


### Notes

- Domains fed to the alive detection must not carry the `http(s)://` scheme.
- Domains fed to the crawler must carry the `http(s)://` scheme.

### Reverse Connection Platform

```yaml
# Reverse connection platform configuration; see https://docs.xray.cool/#/configration/reverse
# Note: the default config disables the reverse platform, in which case vulnerabilities that
# depend on it cannot be detected, including fastjson, ssrf, and PoCs that rely on reverse connections
reverse:
  http:
    enabled: true
    listen_ip: <IP>
    listen_port: <PORT>
  client:
    # Generated from ListenIP and ListenPort by default. This is the address that vulnerable
    # targets connect back to; set it manually when the platform sits behind a reverse proxy,
    # a bound domain name, or port mapping
    http_base_url: "http://<IP>:<PORT>"
```
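With the reverse section enabled in `config.yaml`, xray starts the platform together with the scan, so a quick reachability check against the configured listen address (placeholders as in the config above) confirms it is up:

```shell
# The reverse platform should answer on the configured address
# once xray webscan is running with reverse enabled
$ curl -s -o /dev/null -w "%{http_code}\n" http://<IP>:<PORT>/
```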