And lest we forget, an attacker can always deploy human intelligence to defeat a captcha. There are stories of attackers paying people to crack captchas for spamming purposes without the workers realizing they're participating in illegal activities. Amazon offers a service called Mechanical Turk that brokers exactly this kind of human micro-task work. Amazon would strenuously object if you were to use their service for malicious purposes, and it has the downside of costing money and creating a paper trail. However, there are other, erhm, less scrupulous providers out there who would harbor no such objections.
My favorite mechanism is a hidden checkbox. Give it a label like 'Do you agree to the terms and conditions of using our services?', perhaps even with a link to some serious-looking terms. But you default it to unchecked and hide it through CSS: position it off-page, put it in a container with zero height or zero width, or position a div over top of it with a higher z-index. Roll your own mechanism here and be creative.
The secret is that no human will ever see the checkbox, but most bots fill forms by inspecting the page and manipulating it directly, not through actual vision. Therefore, any form that comes in with that checkbox value set lets you know it wasn't filled in by a human. This technique is called a bot trap. The rule of thumb for these auto-form-filling bots is that if a human has to intervene to get past an individual site, then the attacker has already lost more money (in the form of their time) than they would have made by spreading their spam advertisements.
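To make the trick concrete, here's a minimal sketch in Python. The field name `agree_tos`, the CSS class, and the form layout are my own hypothetical choices, not anything prescribed above; the key point is that a browser omits an unchecked checkbox from the POST body entirely, so the field's mere presence flags a bot.

```python
# A sketch of the bot trap. The checkbox is rendered unchecked and
# hidden off-page with CSS, so no human will ever see or check it.
FORM_SNIPPET = """
<style>.trap { position: absolute; left: -9999px; }</style>
<form method="post" action="/comment">
  <textarea name="comment"></textarea>
  <label class="trap">
    <input type="checkbox" name="agree_tos" value="on">
    Do you agree to the terms and conditions of using our services?
  </label>
  <button type="submit">Post</button>
</form>
"""

def is_bot(form: dict) -> bool:
    """An unchecked checkbox is never submitted by a real browser, so
    the field showing up in the form data at all marks a bot."""
    return "agree_tos" in form

# A human's submission omits the hidden field entirely:
assert not is_bot({"comment": "Nice post!"})
# A naive form-filling bot checks every checkbox it finds on the page:
assert is_bot({"comment": "BUY NOW", "agree_tos": "on"})
```

Note that the check is on presence, not value: even a bot that submits `agree_tos=off` has given itself away.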
(The previous rule of thumb assumes you're protecting a forum or comment form. If actual money or personal information is on the line, then you need more security than just one heuristic. This is still security through obscurity, it just turns out that obscurity is enough to protect you from casual, scripted attacks. Don't deceive yourself into thinking this secures your website against all attacks.)
The other half of the secret is keeping it. Do not alter the response in any way if the box is checked. Show the same confirmation, thank you, or whatever message or page afterwards. That will prevent the bot from knowing it has been rejected.
Again though, just silently discard the request while serving the same thank-you page. Or, if you want to be vindictive, introduce a delay before responding to the spam form; be aware this may not keep them from overwhelming your server, and it may even let them overwhelm you faster by keeping more connections open longer. At that point, you need a hardware solution: a firewall or a load-balancer setup.
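The silent-discard idea can be sketched as a single handler. This is a hypothetical, framework-free example (the field name `agree_tos`, the `save_comment` callback, and the tarpit parameter are all my own illustrative choices): the same page is returned whether or not the trap was tripped, so the bot never learns it was rejected.

```python
import time

THANK_YOU_PAGE = "<h1>Thanks! Your comment has been received.</h1>"

def handle_submission(form: dict, save_comment, tarpit_seconds: float = 0.0) -> str:
    """Serve an identical thank-you page either way, so a bot can't tell
    success from rejection. Optionally delay the spam response; beware
    that long delays keep connections open and can backfire (see above)."""
    if "agree_tos" in form:             # bot trap tripped
        if tarpit_seconds:
            time.sleep(tarpit_seconds)  # the vindictive option
        # Silently discard: do NOT save the comment.
    else:
        save_comment(form)
    return THANK_YOU_PAGE

saved = []
human_page = handle_submission({"comment": "hi"}, saved.append)
bot_page = handle_submission({"comment": "spam", "agree_tos": "on"}, saved.append)
assert human_page == bot_page          # indistinguishable responses
assert saved == [{"comment": "hi"}]    # only the human's comment was kept
```

If you do tarpit, do it in a way that doesn't tie up a worker thread per connection (an async handler or the firewall-level solution mentioned above); a blocking `sleep` like this one is only for illustration.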
There are a lot of resources out there about delaying server responses to slow down attackers, frequently in the form of brute-force password attempts. This IT Security question looks like a good starting point.
I had been thinking about updating this answer for a while regarding the topic of computer vision and form submission. An article surfaced recently that pointed me to this blog post by Steve Hickson, a computer vision enthusiast. Snapchat (apparently some social media platform? I've never used it, feeling older every day...) launched a new captcha-like system where you have to identify pictures (cartoons, really) that contain a ghost. Steve proved that this doesn't verify squat about the submitter, because in typical fashion, computers are better and faster at identifying this simple type of image.
It's not hard to imagine extending a similar approach to other Captcha types. I did a search and found these links interesting as well:
Oh, and we'd hardly be complete without an obligatory XKCD comic.
Wow, thank you for the information. I have read up on ways to prevent bots and most suggest CAPTCHAs, but lately I've been reading people say CAPTCHAs are not going to be around in the near future. This gives me information that I can research, thank you.
I wouldn't say they won't be around in the near future. In my opinion, they have enough of a downside that they're falling out of favor for widespread use. There are plenty of stories (or rants) of Captchas making it way harder for legitimate users while not even stopping 100% of bot traffic. For sensitive applications, a level of difficulty is acceptable, but if it's a small application, or especially one where you benefit more than the user from them completing your form (e.g. feedback, or a business model with heavy competition), Captchas can cause you more problems than they solve.
In the case of the registration form, should I apply this measure too? Patrick M, check my profile, please.
In general, yes, you want to protect every input form with some kind of human detection. Registration forms typically require email verification; not for bot detection but to verify that you have some way to contact the user. If it's registration for an email service, well, check out what gmail does when you create a new account (and they have spam detection built into the sending protocol). If it's registration for a public forum, then absolutely use as much bot detection as you can, because (in my experience) that attracts the most bots looking for easy ways to spam.
I am not a lawyer and this is not legal advice. You may still violate any number of laws for user protections and privacy even if you provide the utmost bot detection.