Here are some ideas for search strategies, especially if you face the difficult task of cleaning up thousands of duplicate files:
Start by searching for large archive files (*.zip) and eliminate as many duplicate archives as possible.
Then search for large duplicate files by entering, say, 1000 KB as the minimum size and unchecking Search in ZIP archives. That way you can concentrate on the files that will reclaim the most disk space. You can then lower the limit gradually and repeat the search. Use the CRC cache option to speed things up. If too many duplicates are found, enter a file mask, such as *.doc, to narrow the search to specific file types.
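The idea behind this step (group candidates cheaply by size first, then confirm with a CRC-32 checksum, honoring a minimum size and a file mask) can be sketched in Python. This is an illustrative sketch, not AcuteFinder's actual implementation; the function name find_duplicates and its parameters are invented for the example:

```python
import fnmatch
import os
import zlib
from collections import defaultdict

def crc32_of(path, chunk=65536):
    """Compute the CRC-32 of a file, reading it in chunks."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc

def find_duplicates(root, min_size=1_000_000, mask="*"):
    """Group files under `root` by (size, CRC-32); groups with more
    than one member are very likely duplicates."""
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            if not fnmatch.fnmatch(name, mask):
                continue  # the file mask narrows the search, e.g. *.doc
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size >= min_size:
                by_size[size].append(path)
    groups = defaultdict(list)
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue  # a unique file size cannot be a duplicate
        for p in paths:
            groups[(size, crc32_of(p))].append(p)
    return [g for g in groups.values() if len(g) > 1]
```

Checksums are only computed for files whose size collides with another file's, which is why starting with a large minimum size keeps repeated searches fast.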
Next, try to find files that exist both as regular files on disk and inside archives such as ZIP files. If a file is found in both places and is not in regular use, delete the copy on disk and keep the archived one, since it most likely takes up less disk space.
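A nice property of the ZIP format makes this comparison cheap: every archive member's CRC-32 is already stored in the central directory, so only the loose files on disk need to be read. The following sketch (the function files_also_in_zip is invented for illustration) matches disk files against archive members by name and CRC:

```python
import os
import zipfile
import zlib

def files_also_in_zip(zip_path, search_root):
    """Return disk files under `search_root` whose name and CRC-32 match
    a member of the ZIP archive -- candidates for deleting the loose copy."""
    # The ZIP central directory already stores each member's CRC-32,
    # so the archive side needs no decompression at all.
    archived = {}
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            if not info.is_dir():
                archived[os.path.basename(info.filename)] = info.CRC
    matches = []
    for dirpath, _, names in os.walk(search_root):
        for name in names:
            if name not in archived:
                continue
            path = os.path.join(dirpath, name)
            crc = 0
            with open(path, "rb") as f:
                while block := f.read(65536):
                    crc = zlib.crc32(block, crc)
            if crc == archived[name]:
                matches.append(path)
    return matches
```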
Finally, try to find old files by entering, for example, 12/31/2002 as the upper limit of the Date last modified range. That way you will find old files that would probably be better kept in archives, or moved to a CD or some other offline storage.
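Filtering by last-modified date comes down to comparing each file's mtime against a cutoff. A minimal sketch (the function files_older_than is invented for the example):

```python
import os
from datetime import datetime

def files_older_than(root, cutoff):
    """Yield files under `root` last modified before `cutoff` (a datetime)
    -- candidates for archiving or moving to offline storage."""
    limit = cutoff.timestamp()
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < limit:
                yield path
```

For example, files_older_than(root, datetime(2002, 12, 31)) corresponds to entering 12/31/2002 as the upper date limit.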
Just try various criteria according to your needs, and you will find more duplicates than you could ever dream of!
Working with huge files:
When working with big files, e.g. files larger than 100 MB, start by unchecking File checksum in the search options but ticking the other search criteria (file name, type, date, time, size). This speeds things up considerably, since calculating a CRC for big files can be very time-consuming.
The resulting list will give you an idea of how many duplicates exist, and you may even want to eliminate some of them right away.
Finally, perform another search with File checksum and File size activated, and grab a cup of coffee or watch a game while AcuteFinder finds the remaining true duplicates.
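The reason the CRC cache option mentioned earlier helps so much here is that an unchanged file never needs to be re-read on a repeated search. The idea can be sketched as a tiny on-disk cache keyed by path, size, and modification time; this is purely illustrative (the CrcCache class is invented for the example, and AcuteFinder's own cache format is not described here):

```python
import json
import os
import zlib

class CrcCache:
    """Tiny on-disk CRC cache keyed by (path, size, mtime), so repeated
    searches do not re-read big files that have not changed."""

    def __init__(self, cache_file):
        self.cache_file = cache_file
        try:
            with open(cache_file) as f:
                self.entries = json.load(f)
        except (OSError, ValueError):
            self.entries = {}  # missing or corrupt cache: start fresh

    def crc32(self, path):
        st = os.stat(path)
        # Size and mtime in the key invalidate the entry if the file changes.
        key = f"{os.path.abspath(path)}|{st.st_size}|{st.st_mtime}"
        if key not in self.entries:
            crc = 0
            with open(path, "rb") as f:
                while block := f.read(65536):
                    crc = zlib.crc32(block, crc)
            self.entries[key] = crc
        return self.entries[key]

    def save(self):
        with open(self.cache_file, "w") as f:
            json.dump(self.entries, f)
```

On the second and later searches, only new or modified files pay the checksum cost; everything else is answered from the cache.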