Monitoring Extension Approaches

  1. #GitHub monitoring
  2. Useful for collecting and organizing the latest exploits (exp) or PoCs
  3. Useful for discovering assets related to the test targets
  4. #Subdomain queries of all kinds
  5. #DNS, ICP filings, certificates
  6. #Requesting the CDN from nodes around the globe
  7. Enumerating/brute-forcing or resolving the corresponding subdomains
  8. Useful for discovering registration information related to the administrator
  9. #Dark-engine searches
  10. fofa, Shodan, ZoomEye
  11. #Harvesting via the WeChat Official Account interface
  12. #Internal groups, internal applications, internal interfaces
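The certificate angle above (DNS, ICP filings, certificates) can be automated against certificate-transparency logs. A minimal sketch, assuming crt.sh's public JSON output format (a `name_value` field that may hold several names separated by newlines); the helper names are my own, not from the course material:

```python
import json


def subdomains_from_crtsh_json(json_text, domain):
    """Extract unique subdomains for `domain` from a crt.sh JSON response body."""
    seen = set()
    for entry in json.loads(json_text):
        # name_value may contain several certificate names, one per line
        for name in entry.get("name_value", "").splitlines():
            name = name.lstrip("*.").lower()  # drop wildcard prefixes like *.example.com
            if name.endswith(domain):
                seen.add(name)
    return sorted(seen)


def query_crtsh(domain):
    """Fetch certificate-transparency entries for %.domain from crt.sh."""
    import requests  # third-party; imported here so the parser above stays usable offline
    resp = requests.get(f"https://crt.sh/?q=%25.{domain}&output=json", timeout=30)
    resp.raise_for_status()
    return subdomains_from_crtsh_json(resp.text, domain)
```

Feeding the discovered names into the subdomain enumeration step above gives a second, passive source alongside brute-forcing.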

Asset Collection Methods

[Figure 1: Information Gathering — Asset Monitoring Extension]
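For the dark engines mentioned earlier (fofa, Shodan, ZoomEye), asset collection is usually scripted against their HTTP APIs. A minimal sketch of building a fofa search request, assuming fofa's `search/all` endpoint with a base64-encoded query (`qbase64`); treat the endpoint and parameter names as assumptions to verify against fofa's current API docs:

```python
import base64
from urllib.parse import urlencode


def build_fofa_search_url(query, email, key, size=100):
    """Build a fofa API search URL; fofa expects the query string base64-encoded (qbase64)."""
    qbase64 = base64.b64encode(query.encode("utf-8")).decode("ascii")
    params = urlencode({
        "email": email,      # account email (assumed parameter name)
        "key": key,          # API key (assumed parameter name)
        "qbase64": qbase64,  # base64 of e.g. domain="example.com"
        "size": size,        # number of results per page
    })
    return "https://fofa.info/api/v1/search/all?" + params
```

The same request can then be issued with `requests.get()` as in the monitoring script below; Shodan and ZoomEye follow a similar key-plus-query pattern with their own endpoints.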

Demonstration Cases

Monitoring the Latest EXP Releases and More

#Title: wechat push CVE-2020
#Date: 2020-5-9
#Exploit Author: weixiao9188
#Version: 4.0
#Tested on: Linux, Windows
#cd /root/sh/git/ && nohup python3 /root/sh/git/git.py &
#coding:UTF-8
import requests
import json
import time
import os
import pandas as pd

time_sleep = 60  # crawl once every 60 seconds
while True:
    headers1 = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3741.400 QQBrowser/10.5.3863.400"}
    # check whether the data file already exists
    datas = []
    response1 = None
    response2 = None
    if os.path.exists("olddata.csv"):
        # the file exists: fetch only the 10 most recent results each round
        df = pd.read_csv("olddata.csv", header=None)
        datas = df.where(df.notnull(), None).values.tolist()  # turn nan in the loaded data into None
        requests.packages.urllib3.disable_warnings()
        response1 = requests.get(url="https://api.github.com/search/repositories?q=CVE2020&sort=updated&per_page=10", headers=headers1, verify=False)
        response2 = requests.get(url="https://api.github.com/search/repositories?q=RCE&sort=updated&per_page=10", headers=headers1, verify=False)
    else:
        # no file yet: fetch everything
        datas = []
        requests.packages.urllib3.disable_warnings()
        response1 = requests.get(url="https://api.github.com/search/repositories?q=CVE2020&sort=updated&order=desc", headers=headers1, verify=False)
        response2 = requests.get(url="https://api.github.com/search/repositories?q=RCE&sort=updated&order=desc", headers=headers1, verify=False)
    data1 = json.loads(response1.text)
    data2 = json.loads(response2.text)
    for j in [data1["items"], data2["items"]]:
        for i in j:
            s = {"name": i['name'], "html": i['html_url'], "description": i['description']}
            s1 = [i['name'], i['html_url'], i['description']]
            if s1 not in datas:
                #print(s1)
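The loop above ends (truncated here) by checking each fetched `[name, html_url, description]` row against the rows already saved in olddata.csv. A minimal standalone sketch of that dedup step (the function name `new_entries` is hypothetical, not from the original script):

```python
def new_entries(fetched, known):
    """Return rows from `fetched` that are not yet in `known`, preserving order."""
    known_set = {tuple(row) for row in known}
    fresh = []
    for row in fetched:
        key = tuple(row)
        if key not in known_set:
            fresh.append(row)
            known_set.add(key)  # also deduplicate within the current batch
    return fresh
```

The fresh rows are the ones worth pushing as notifications and appending to olddata.csv for the next round.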