Parallel recursive crawler using agents

The aim here is to demonstrate a method of distributing work across multiple workers in parallel using the built-in F# agent (MailboxProcessor). Crawling one page may turn up several new pages to fetch, so the process is recursive and continues until no new URLs are found. The main focus is how to process a potentially indefinite queue of work across a pool of workers, rather than how to parse web pages.
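
To make the shape of the solution concrete, here is a minimal sketch of the pattern, assuming one supervisor agent that owns the visited set and a fixed round-robin pool of worker agents. The names (SupervisorMsg, startWorker, startSupervisor, fetchLinks, crawl), the regex link extraction, and the in-flight page counter are all assumptions of this sketch, not a definitive implementation.

```fsharp
open System.Net.Http
open System.Text.RegularExpressions
open System.Threading

/// Messages understood by the supervisor agent.
type SupervisorMsg =
    | Enqueue of string          // a URL discovered (or seeded) for crawling
    | Completed of string list   // links found on a page a worker just finished

let httpClient = new HttpClient()
let linkPattern = Regex("""href\s*=\s*["']([^"']+)["']""", RegexOptions.IgnoreCase)

/// Fetch one page and return the absolute links found on it.
/// The regex-based "parsing" is deliberately naive; parsing is not the focus.
let fetchLinks (url: string) = async {
    try
        let! html = httpClient.GetStringAsync(url) |> Async.AwaitTask
        return
            [ for m in linkPattern.Matches(html) -> m.Groups.[1].Value ]
            |> List.filter (fun u -> u.StartsWith "http")
    with _ ->
        return []   // ignore fetch failures in this sketch
}

/// A worker agent: receives URLs one at a time, crawls each page,
/// and reports the links it found back to the supervisor.
let startWorker (report: string list -> unit) =
    MailboxProcessor<string>.Start(fun inbox -> async {
        while true do
            let! url = inbox.Receive()
            let! links = fetchLinks url
            report links
    })

/// The supervisor agent: tracks visited URLs, hands fresh ones to a
/// round-robin pool of workers, and counts in-flight pages so it can
/// detect when the recursive crawl has completely drained.
let startSupervisor workerCount (onFinished: Set<string> -> unit) =
    MailboxProcessor<SupervisorMsg>.Start(fun inbox ->
        let workers =
            Array.init workerCount (fun _ -> startWorker (Completed >> inbox.Post))
        // Send any not-yet-visited URLs to workers, rotating through the pool.
        let dispatch (visited: Set<string>) pending i urls =
            let fresh =
                urls |> List.distinct
                     |> List.filter (fun u -> not (Set.contains u visited))
            fresh |> List.iteri (fun k u -> workers.[(i + k) % workerCount].Post u)
            Set.union visited (Set.ofList fresh), pending + fresh.Length, i + fresh.Length
        let rec loop visited pending i = async {
            let! msg = inbox.Receive()
            match msg with
            | Enqueue url ->
                let visited, pending, i = dispatch visited pending i [ url ]
                return! loop visited pending i
            | Completed links ->
                let visited, pending, i = dispatch visited (pending - 1) i links
                if pending = 0 then onFinished visited   // no work left anywhere
                return! loop visited pending i
        }
        loop Set.empty 0 0)

// Example usage: seed the crawl and block until the queue drains.
// (On the open web this may effectively never terminate; point it at a
// small, closed set of pages to see it finish.)
let crawl rootUrl =
    use finished = new ManualResetEvent(false)
    let mutable result = Set.empty
    let supervisor =
        startSupervisor 4 (fun visited ->
            result <- visited
            finished.Set() |> ignore)
    supervisor.Post(Enqueue rootUrl)
    finished.WaitOne() |> ignore
    result
```

The in-flight counter is the key design choice here: the work queue can be momentarily empty while workers are still fetching pages that may yield new URLs, so an empty queue alone does not mean the crawl is finished. Only when the counter of dispatched-but-uncompleted pages reaches zero can no further work appear.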