By Joel Spolsky
Here are a few tips for running a beta test of a software product intended for large audiences -- what I call "shrinkwrap". These apply for commercial or open source projects; I don't care whether you get paid in cash, eyeballs, or peer recognition, but I'm focused on products for lots of users, not internal IT projects.
Open betas don't work. You either get too many testers (think Netscape), in which case you can't extract useful data from the flood, or too few testers, in which case you don't get enough reports.
The best way to get a beta tester to send you feedback is to appeal to their psychological need to be consistent. You need to get them to say that they will send you feedback, or, even better, apply to be in the beta testing program. Once they have taken some positive action such as filling out an application and checking the box that says "I agree to send feedback and bug reports promptly," many more people will do so in order to be consistent.
Don't think you can get through a full beta cycle in less than eight to ten weeks. I've tried; lord help me, it just can't be done.
Don't expect to release new builds to beta testers more than once every two weeks. I've tried; lord help me, it just can't be done.
Don't plan a beta with fewer than four releases. I haven't tried that because it was so obviously not going to work!
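The three rules above fit together arithmetically: at most one build every two weeks, times at least four releases, already implies a beta of at least eight weeks. A trivial sketch of that sanity check:

```python
# Minimum beta length implied by the cadence rules:
# builds no more often than every two weeks, and at least four releases.

WEEKS_BETWEEN_BUILDS = 2
MIN_RELEASES = 4

min_beta_weeks = WEEKS_BETWEEN_BUILDS * MIN_RELEASES
print(min_beta_weeks)  # 8 -- consistent with the eight-to-ten-week rule
```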
If you add a feature, even a small one, during the beta process, the clock goes back to the beginning of the eight weeks and you need another 3-4 releases. One of the biggest mistakes I ever made was adding some whitespace-preserving code to CityDesk 2.0 towards the end of the beta cycle, which had some, shall we say, unexpected side effects that a longer beta would have flushed out.
We have a policy of giving a free copy of the software to anyone who sends any feedback, positive, negative, whatever. But people who don't send us anything don't get a free copy at the end of the beta.
The minimum number of serious testers you need (i.e., people who send you three-page summaries of their experience) is probably about 100. If you're a one-person shop, that's all the feedback you can handle. If you have a team of testers or beta managers, try to get 100 serious testers for every employee who is available to handle feedback.
Even if you have an application process, only one out of five testers is really going to try the product and send you feedback. So, for example, if you have a QA department with 3 testers, you should approve 1500 beta applications to get 300 serious testers. Fewer than that and you won't hear about every problem; more and you'll be deluged with duplicate feedback.
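The sizing math above is mechanical enough to write down. A minimal sketch, assuming the two ratios from the text (about 100 serious testers per staffer handling feedback, and one in five approved applicants actually following through); the function name is mine:

```python
# Rough beta-pool sizing using the ratios from the text.

SERIOUS_TESTERS_PER_STAFFER = 100   # what one employee can digest
FEEDBACK_RATE = 1 / 5               # approved applicants who actually report

def applications_to_approve(feedback_staff: int) -> int:
    """How many beta applications to approve for a given QA headcount."""
    serious_testers_needed = SERIOUS_TESTERS_PER_STAFFER * feedback_staff
    return round(serious_testers_needed / FEEDBACK_RATE)

print(applications_to_approve(3))  # 1500, matching the worked example
print(applications_to_approve(1))  # 500 for a one-person shop with help
```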
Most beta testers will try out the program when they first get it, and then lose interest. They are not going to be interested in retesting it every time you drop them another build unless they really start using the program every day, which is unlikely for most people. Therefore, stagger the releases. Split your beta population into four groups, and with each new release add another group that gets the software, so every milestone reaches some testers who are seeing the product for the first time.
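The staggering scheme is simple to sketch in code. This is an illustration of the idea, not a prescription: the round-robin split and the function names are my assumptions.

```python
# Staggered rollout: split the beta list into four groups and
# activate one more group with each release, so every build lands
# on some fresh eyes.

def split_into_groups(testers, n_groups=4):
    """Round-robin the tester list into n_groups groups."""
    groups = [[] for _ in range(n_groups)]
    for i, tester in enumerate(testers):
        groups[i % n_groups].append(tester)
    return groups

def recipients_for_release(groups, release_number):
    """Release 1 goes to group 1; release 2 to groups 1-2; and so on."""
    active = groups[:min(release_number, len(groups))]
    return [tester for group in active for tester in group]

groups = split_into_groups([f"tester{i}" for i in range(8)])
print(len(recipients_for_release(groups, 1)))  # 2: only the first group
print(len(recipients_for_release(groups, 4)))  # 8: everyone by release four
```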
Don't confuse a technical beta with a marketing beta. I've been talking about technical betas here, in which the goal is to find bugs and get last-minute feedback. Marketing betas are prerelease versions of the software given to the press, to big customers, and to the guy who is going to write the Dummies book that has to appear on the same day as the product. With marketing betas you don't expect to get feedback (although the people who write the books are likely to give you copious feedback no matter what you do, and if you ignore it, it will be cut and pasted into their book).