When copying files in HDFS, the target file normally cannot already exist, so the usual workaround is to remove the target first and then copy to ensure the copy succeeds. While working on a Pig script that copies files to an HDFS directory, I found a post from Witty Keegan about an undocumented feature of Hadoop's cp command: you can use cp -f to overwrite existing files, just as you can in Unix.
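A quick sketch of the difference (the paths here are hypothetical, just for illustration):

```shell
# Without -f: the copy fails if the target exists, so the usual
# workaround is remove-then-copy
hadoop fs -rm /data/out/part-00000
hadoop fs -cp /data/in/part-00000 /data/out/part-00000

# With -f: overwrite the existing target in a single step
hadoop fs -cp -f /data/in/part-00000 /data/out/part-00000
```

The single-step form also avoids the window between the remove and the copy where the target file is missing.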
This is a helpful option in some situations, especially in scripts that rerun and need to replace earlier output.