jdcfsu, spidering is done via links, so the content inside a directory is only as accessible as the links you provide to it. Just because a directory exists doesn't mean it will be spidered. Spiders are also subject to the same rules a user would be; an area requiring authentication (wp-admin, for instance) is not any more accessible to a spider than it is to a logged-out visitor.
Restricting spiders is very simple, though; in your robots.txt, this is the structure you will need to follow:
User-agent: Googlebot   <-- this can also be * to cover all spiders
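For example, a minimal robots.txt (it has to sit at the root of your site) might look like the sketch below; the directory names here are just placeholders for whatever you want to keep spiders out of:

    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

    User-agent: Googlebot
    Disallow: /no-google/

Bear in mind that robots.txt is only a request: well-behaved spiders will honour it, but it doesn't actually block anything from fetching those URLs.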
A good tutorial on using robots.txt is here: