ICP was not designed for web accelerators. It was designed back in the days of the World Wide Wait, when you were lucky to have a 256Kbps connection split among 20 people. At the time, every network with clients deployed Squid to accelerate their web access. Some people had more than one Squid server, and so the need arose for the Squids to coordinate their efforts.
Fetching an object over an overloaded 256Kbps line has a huge cost associated with it, and doing some extra work against a neighbouring cache over a local 100Mbps connection is cheap if it saves you from refetching the object. So ICP made a lot of sense back then.
Today, the situation is somewhat different. Varnish Cache servers regularly run at 10,000 requests a second, and slowing down content delivery by even a few milliseconds can have a huge performance impact. In most cases fetching a single object from your backend isn't really an issue; it's the volume of requests that might kill your server.
Second, the ICP protocol facilitates duplication of content across caches, which I think is wasteful. If you have two 16GB caches you really want to cache 32GB of content, not 16GB. By deploying something as simple as target URL hashing you make sure that each and every object exists in only one cache in your cluster.
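As a rough sketch of what target URL hashing looks like in modern VCL (Varnish 4 or later), assuming a front tier that routes to two cluster nodes with made-up hostnames, the bundled directors vmod can hash on the request URL so each object lands on exactly one node:

```vcl
vcl 4.0;
import directors;

# Hypothetical cluster nodes -- adjust hosts and ports to your setup.
backend node1 { .host = "cache1.example.com"; .port = "80"; }
backend node2 { .host = "cache2.example.com"; .port = "80"; }

sub vcl_init {
    # A hash director picks a backend deterministically from a key.
    new cluster = directors.hash();
    cluster.add_backend(node1, 1.0);
    cluster.add_backend(node2, 1.0);
}

sub vcl_recv {
    # Hash on the URL: the same object always goes through the same
    # node, so no object is cached twice across the cluster.
    set req.backend_hint = cluster.backend(req.url);
}
```

With equal weights the URL space is split roughly in half, giving you the combined cache size rather than two copies of the same working set.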
If you must, you can emulate ICP-like behaviour by writing VCL code that checks your neighbouring cache for the object you are looking for. Setups like these have been discussed on varnish-misc several times, and implementing one should be fairly simple. I believe you could even do target URL hashing if you have more than two servers, so the duplication won't be as bad.
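A minimal sketch of that idea, assuming just two Varnish nodes (the backend names and the X-Cluster marker header are made up for illustration): on a local miss, try the neighbouring cache first, and mark forwarded requests so the neighbour goes to the origin instead of bouncing the request back.

```vcl
vcl 4.0;

backend origin { .host = "origin.example.com"; .port = "80"; }
backend peer   { .host = "cache2.example.com"; .port = "80"; }

sub vcl_recv {
    if (req.http.X-Cluster) {
        # The request already came from the neighbour: fetch from
        # the origin rather than ping-ponging between the caches.
        set req.backend_hint = origin;
    } else {
        # Local miss: ask the neighbouring cache before the origin.
        set req.backend_hint = peer;
    }
}

sub vcl_backend_fetch {
    # Mark outgoing fetches so the neighbour can detect them.
    set bereq.http.X-Cluster = "1";
}
```

Note the trade-off the article describes: every miss now costs an extra hop, which is exactly the kind of added latency that made ICP unattractive in the first place.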