On Wed, Sep 26, 2012 at 3:32 PM, Keith Roberts wrote:
On Wed, 26 Sep 2012, Earnie Boyd wrote:
On Wed, Sep 26, 2012 at 2:08 PM, Keith Roberts wrote:
Surely, would it not be smarter and cleaner to just move all the .txt files into a separate directory called 'docs', and then add a robots.txt exclusion rule to stop them from being indexed by search engines?
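(A sketch of the exclusion rule being suggested, assuming the hypothetical 'docs' directory mentioned above sits at the site root:)

```
# robots.txt -- ask well-behaved crawlers to skip the docs directory
User-agent: *
Disallow: /docs/
```

Note this is advisory only: it keeps compliant search engines out of the index, but does nothing to block a visitor or crawler that ignores robots.txt.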
That doesn't stop the knowing cracker. And robots.txt is already cognizant of the documentation files.
Good point Earnie.
But do you need the .txt files uploaded to the web server anyway?
That depends on one's workflow. I typically have an ssh session open to extract the tarball. I would find it easiest to just upload everything recursively with an ftp client, but I could remove the .txt files beforehand. Maybe one day drush can be helpful here, setting the file permissions appropriately (if it doesn't already) and perhaps using sftp to upload the files for you.
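(Removing the .txt files beforehand can be a one-liner in the ssh session. This is a hypothetical sketch: the directory name drupal-7.15 is a placeholder for whatever the tarball extracts to, and Drupal keeps these documentation files at the top level, hence -maxdepth 1:)

```shell
# Delete the top-level documentation files (CHANGELOG.txt, INSTALL.txt,
# etc.) from a freshly extracted Drupal tree before uploading it.
# -maxdepth 1 restricts the match to the tree's root, so .txt files
# deeper in the tree (e.g. under sites/) are left alone.
find drupal-7.15 -maxdepth 1 -name '*.txt' -delete
```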