wapiti - Python web scanner

Download:

https://sourceforge.net/projects/wapiti/
Reference: http://wapiti.sourceforge.net/

Installation:

cd /opt/wapiti3-3.0.1
python3 setup.py install
wapiti -u http://192.168.241.136/ -v 2
python wapiti http://192.168.241.136/

Test target: http://192.168.241.136/ldap/example1.php?username=hacker&password=hacker

The installation from source failed.

Installation method 2:
apt-get install wapiti
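
To confirm the packaged version works, the version number and the available attack modules can be checked. This is only a sketch using the --version and --list-modules flags shown in the help output later in this section; the packaged wapiti may be an older release and print slightly different output:

wapiti --version
wapiti --list-modules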

Usage

As a test, the LDAP-injection page was fuzzed; wapiti did not identify the vulnerability. The tool does, however, have a good detection rate for vulnerabilities such as SQL injection.

/usr/bin/wapiti -u "http://192.168.241.136/ldap/example1.php?username=hacker&password=hacker"
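
As a sketch of a more complete run against the same host, the report options from the help output below can be combined with the base URL (the output directory /tmp/wapiti_report/ is an arbitrary choice; the HTML format writes the report into a directory):

wapiti -u http://192.168.241.136/ -v 2 --color -f html -o /tmp/wapiti_report/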

Detectable vulnerability types

backup: this module searches for backup copies of scripts on the server.
blindsql: time-based blind SQL injection scanner.
crlf: searches for CR/LF injection in HTTP headers.
exec: module for detecting command-execution vulnerabilities.
file: searches for include()/fread() and other file-handling vulnerabilities.
htaccess: tries to bypass weak htaccess configurations.
nikto: searches for potentially dangerous files using the Nikto database.
permanentxss: looks for permanent (stored) XSS.
sql: standard error-based SQL injection scanner.
xss: module for XSS detection.
buster: module for file and directory buster attacks - checks for "bad" files.
shellshock: module for Shellshock bug detection.
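
Individual modules are selected with the -m option from the parameter list below. As a sketch tied to the observation above that SQL injection is detected well, a scan restricted to the SQL and XSS modules could look like this (module names taken from the list above, passed as a comma-separated list per the -m description in the help):

wapiti -u http://192.168.241.136/ -m "sql,blindsql,xss,permanentxss"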

Tool parameters:

optional arguments:
-h, --help show this help message and exit
-u URL, --url URL The base URL used to define the scan scope (default
scope is folder)
--scope {page,folder,domain,url}
Set scan scope
-m MODULES_LIST, --module MODULES_LIST
List of modules to load
--list-modules List Wapiti attack modules and exit
-l LEVEL, --level LEVEL
Set attack level
-p PROXY_URL, --proxy PROXY_URL
Set the HTTP(S) proxy to use. Supported: http(s) and
socks proxies
-a CREDENTIALS, --auth-cred CREDENTIALS
Set HTTP authentication credentials
--auth-type {basic,digest,kerberos,ntlm}
Set the authentication type to use
-c COOKIE_FILE, --cookie COOKIE_FILE
Set a JSON cookie file to use
--skip-crawl Don't resume the scanning process, attack URLs scanned
during a previous session
--resume-crawl Resume the scanning process (if stopped) even if some
attacks were previously performed
--flush-attacks Flush attack history and vulnerabilities for the
current session
--flush-session Flush everything that was previously found for this
target (crawled URLs, vulns, etc)
-s URL, --start URL Adds an url to start scan with
-x URL, --exclude URL
Adds an url to exclude from the scan
-r PARAMETER, --remove PARAMETER
Remove this parameter from urls
--skip PARAMETER Skip attacking given parameter(s)
-d DEPTH, --depth DEPTH
Set how deep the scanner should explore the website
--max-links-per-page MAX
Set how many (in-scope) links the scanner should
extract for each page
--max-files-per-dir MAX
Set how many pages the scanner should explore per
directory
--max-scan-time MINUTES
Set how many minutes you want the scan to last (floats
accepted)
--max-parameters MAX URLs and forms having more than MAX input parameters
will be erased before attack.
-S FORCE, --scan-force FORCE
Easy way to reduce the number of scanned and attacked
URLs. Possible values: paranoid, sneaky, polite,
normal, aggressive, insane
-t SECONDS, --timeout SECONDS
Set timeout for requests
-H HEADER, --header HEADER
Set a custom header to use for every requests
-A AGENT, --user-agent AGENT
Set a custom user-agent to use for every requests
--verify-ssl {0,1} Set SSL check (default is no check)
--color Colorize output
-v LEVEL, --verbose LEVEL
Set verbosity level (0: quiet, 1: normal, 2: verbose)
-f FORMAT, --format FORMAT
Set output format. Supported: json, html (default),
txt, openvas, vulneranet, xml
-o OUPUT_PATH, --output OUPUT_PATH
Output file or folder
--no-bugreport Don't send automatic bug report when an attack module
fails
--version Show program's version number and exit
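
As a sketch combining several of the options above into one command (the user-agent string, scan limits, and output directory are arbitrary example values):

wapiti -u http://192.168.241.136/ --scope folder -d 3 --max-scan-time 30 -t 10 -A "Mozilla/5.0" --color -v 1 -f html -o /tmp/wapiti_report/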

The short options below follow an older wapiti command line, so some letters differ from the help above:

-n: limit the number of URLs read with the same pattern (prevents infinite loops); here the limit is 10.
-b: set the scan scope; here we follow links to all pages in the same domain as the given URL.
-u: use colors to highlight vulnerable parameters in the output.
-v: set the verbosity level; here we print every URL.
-f: set the report format; here we choose HTML.
-o: set the report destination; in our case it must be a directory because we chose the HTML format.
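
A sketch of a full command using this older option style, assuming a wapiti 2.x-style invocation with a positional URL, a scope value of "domain", and an arbitrary report directory:

wapiti http://192.168.241.136/ -n 10 -b domain -u -v 1 -f html -o /tmp/wapiti_report/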