DerbyCon Talk: https://www.youtube.com/watch?v=RFxUfoVgMrw
After watching the DerbyCon presentation by Patrick Mathieu, I've been experimenting with replacing, or at least supplementing, DirBuster with this new tool. It offers a number of potential benefits:
- Integration with BurpSuite
- “Seeded” brute-forcing – you can manually navigate to known pages within a site and the plugin will automatically add them to the spidered results as it is running
- Spidering of site content using robots.txt and sitemaps
- Based on the detected server, it will dynamically search for common system directories, configuration files, etc. that correspond to that particular environment.
- 404 checks to reduce false positives for pages that don’t really exist
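The 404 check matters because many servers return a friendly "not found" page with an HTTP 200 status, which naive brute-forcers count as a hit. One common way to handle this (a rough sketch of the general technique, not the plugin's actual implementation; the helper names are my own) is to request a path that almost certainly doesn't exist, then flag any candidate whose response body looks nearly identical to that baseline:

```python
# Sketch of a "soft 404" check: request a random, nonexistent path to
# capture the server's not-found template, then treat near-identical
# responses to other paths as false positives.
import difflib
import secrets

def random_probe_path() -> str:
    """Build a path that is almost certainly absent from the site."""
    return "/" + secrets.token_hex(16)

def is_soft_404(candidate_body: str, known_404_body: str,
                threshold: float = 0.9) -> bool:
    """Flag a hit as a false positive when its body is nearly
    identical to the server's response for a nonexistent page."""
    ratio = difflib.SequenceMatcher(None, candidate_body,
                                    known_404_body).ratio()
    return ratio >= threshold

# Illustrative bodies: the not-found template vs. a genuinely distinct page.
baseline = "<html><body>Sorry, we could not find that page.</body></html>"
real_page = "<html><body>Admin console - please log in.</body></html>"
print(is_soft_404(baseline, baseline))   # True: matches the template
print(is_soft_404(real_page, baseline))  # False: meaningfully different
```

In practice you would also compare status codes, response lengths, and redirect targets, since some sites vary the soft-404 body per URL.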
Some of the cooler potential features are still under construction, but will be awesome once implemented:
- More granular control of rate limiting. You can spider at a much slower rate to try to avoid tripping alarms.
- Scraping content from already spidered pages to create dynamic word lists for further dictionary-based attacks. These lists would be targeted toward the particular organization rather than relying on a generic or industry-specific word list.
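The dynamic word-list idea is straightforward to prototype. Here is a minimal sketch in the spirit of that feature (the parsing approach and thresholds are my own assumptions, not the plugin's): pull visible text out of a page and keep the distinct words above a minimum length as brute-force candidates.

```python
# Build a target-specific word list from page content using only the
# standard library. Words shorter than min_len are dropped as noise.
import re
from html.parser import HTMLParser

class WordScraper(HTMLParser):
    """Collect candidate words from the visible text of a page."""
    def __init__(self, min_len: int = 4):
        super().__init__()
        self.min_len = min_len
        self.words: set[str] = set()

    def handle_data(self, data: str) -> None:
        # Only runs of letters; numbers and markup are ignored here.
        for word in re.findall(r"[A-Za-z]+", data):
            if len(word) >= self.min_len:
                self.words.add(word.lower())

# Hypothetical page content for illustration.
html_page = """
<html><body>
  <h1>Acme Payroll Portal</h1>
  <p>Contact the payroll team for onboarding help.</p>
</body></html>
"""

scraper = WordScraper()
scraper.feed(html_page)
print(sorted(scraper.words))
# ['acme', 'contact', 'help', 'onboarding', 'payroll', 'portal', 'team']
```

Feeding every spidered page through a scraper like this yields a list of terms the organization actually uses, which is exactly what makes it more effective than a generic dictionary.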
Even though the application is still a work in progress, I'll definitely be keeping up with its development, as it seems like a promising tool for the future. In its current state it's still quite useful as an additional tool for discovering items that DirBuster would either miss entirely or take a long time to find.
It has also given me some good ideas on how to do more targeted brute-forcing of sites:
- Scrape word lists from all known site content
- Find a way to query and add any related terms to build out a more enriched list
- Use these smaller targeted lists in initial scans before performing more extensive dictionary attacks to save time and perhaps get more meaningful results
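The third step benefits from expanding the scraped terms with common variations before resorting to a large generic dictionary. A hypothetical sketch of that enrichment step (the mutation rules below are illustrative assumptions, not any tool's actual behavior):

```python
# Expand a small scraped word list with simple path/file variations
# to form a compact, targeted list for an initial scan.
def enrich(words: list[str]) -> list[str]:
    """Generate casing and suffix variants for each base word."""
    suffixes = ["", ".bak", ".old", ".zip", "_admin", "-dev"]
    out: set[str] = set()
    for word in words:
        for suffix in suffixes:
            out.add(word + suffix)              # e.g. payroll.bak
            out.add(word.capitalize() + suffix)  # e.g. Payroll.bak
    return sorted(out)

candidates = enrich(["payroll", "intranet"])
print(len(candidates))  # 2 words x 2 casings x 6 suffixes = 24 entries
```

Because the base list is small, even aggressive mutation keeps the targeted list tiny compared to a generic dictionary, so the initial scan stays fast.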