It’s been close to a year since I started my journey in the bug bounty scene, and the community has taught me a lot. It feels only fair to try and give something back, so here it is: my second blog post, about a tool I built that helps with continuous recon. I hope it helps you in some way.
There is a huge rush around the “recon” phase of hacking, and I have been a fan of it for a long time. In my experience, though, organising and visualising data in a form that can be analysed easily and efficiently is a more complex problem than running different tools to collect data about targets.
Tracking JS files and websites for any change was one of the tasks I wanted to automate.
At first I tried to do this manually, creating a new file every time I enumerated and naming it with a date prefix. In the end, though, I got overwhelmed by the number of things I had to go through, so to solve this problem I tried to integrate automation more seriously into my workflow.
I had to complete four *mini* tasks to build the whole automation cycle:
- Fetching data for a URL for the first time
- Saving properties like [response code, content-length, location header, lines, words]
- Comparing every subsequent fetch against the stored result to see whether any of those properties changed
- Displaying it all in an HTML report where I can search and sort
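The core of the first three steps can be sketched in a few lines of Python. This is a minimal illustration, not Websy’s actual code: the function names (`extract_props`, `diff_props`, `track`) and the JSON snapshot file are my own assumptions about how such a tracker could be structured.

```python
import json
from pathlib import Path

def extract_props(status_code, headers, body):
    # Hypothetical property set mirroring the list above;
    # Websy's real storage format may differ.
    return {
        "response_code": status_code,
        "content_length": len(body),
        "location_header": headers.get("Location", ""),
        "lines": body.count("\n") + 1,
        "words": len(body.split()),
    }

def diff_props(old, new):
    """Return the properties whose values changed between two snapshots."""
    return {k: (old.get(k), new[k]) for k in new if old.get(k) != new[k]}

def track(url, status_code, headers, body, store=Path("snapshots.json")):
    """Save first-seen properties; on later fetches, report any changes."""
    snapshots = json.loads(store.read_text()) if store.exists() else {}
    new = extract_props(status_code, headers, body)
    changes = diff_props(snapshots[url], new) if url in snapshots else {}
    snapshots[url] = new
    store.write_text(json.dumps(snapshots, indent=2))
    return changes
```

The first call for a URL returns an empty dict (nothing to compare against); any later call returns only the properties that changed, e.g. `{"content_length": (12, 20), "words": (2, 3)}`, which is exactly the signal you want to surface in the HTML report.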
The output of Websy looks like this: