??? 07/18/05 05:45
#97535 - definitions? Responding to: ???'s previous message
When many of us speak of "testing" we mean acceptance testing, i.e. certification that a system is ready for delivery to the one who's paying for it. If, under any of the circumstances specified in the test plan, it is possible to provoke a malfunction during "testing," then the device must respond in a prespecified manner. If it fails to do that, then it isn't ready for delivery.
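To make "respond in a prespecified manner" concrete, here is a minimal sketch in C. Every name in it (fault_t, outputs_safe_state, the range limits) is invented for illustration, not taken from any real design: the idea is simply that a deliberately provoked malfunction lands the device in a documented safe state rather than in arbitrary behavior.

    #include <stdint.h>

    /* Hypothetical fault codes; a real test plan would enumerate its own. */
    typedef enum { FAULT_NONE, FAULT_SENSOR, FAULT_RANGE } fault_t;

    static volatile fault_t latched_fault = FAULT_NONE;

    static void outputs_safe_state(void)
    {
        /* De-energize actuators, force outputs to their documented
         * safe levels. On an 8051 this might be clearing a port,
         * e.g. P1 = 0x00 (illustrative only). */
    }

    static void fault_handler(fault_t f)
    {
        latched_fault = f;      /* latch the first fault for diagnosis */
        outputs_safe_state();   /* enter the prespecified safe state   */
        for (;;) { }            /* and stay there until reset/service  */
    }

    int16_t read_sensor_checked(int16_t raw)
    {
        /* A stimulus the test plan can deliberately apply (an
         * out-of-range input) to verify the prespecified response. */
        if (raw < -100 || raw > 100)
            fault_handler(FAULT_RANGE);
        return raw;
    }

The point of the test plan is that the tester can force the out-of-range branch and confirm the latched fault and safe outputs, exactly as documented.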
I think what Jan refers to in his remarks is what we casually call "smoke-testing," namely applying power to see what happens ... seldom advisable unless you already know what will happen. You only do that once you know that all the interconnections are correct and that there are no "gratuitous" connections. You already know what the software "should" do, based on extensive study and simulation. You already know the conditions to which the system under study is exposed, so you know what it will do. If it fails to do that, you've made an error.

When you're "testing," you have a system that has already been thoroughly exercised at all the prespecified stresses and under all the prespecified operating conditions. You've operated the system under worst-case conditions for extended periods, and you've verified that it behaves exactly as specified in the design documents. Now you have to prove that THIS unit behaves exactly as the others, deemed to work properly, behave under the same conditions.

Software bugs are not a factor in testing. They're only a factor during the debugging phase, very early in the development cycle. Testing is the LAST thing before the system is packaged for shipment to the customer. Debugging is in the first 5% of the work on the physical hardware. Testing is the last part, and often the longest.

RE
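(A sketch of what "prove that THIS unit behaves exactly as the others" might look like in practice: a per-unit acceptance check in C that compares measurements against limits taken from the design documents. The channels, limits, and the measure_channel() stub below are all invented for illustration; a real test plan would define them.)

    #include <stdio.h>

    struct limit { const char *name; double lo, hi; };

    /* Hypothetical limits, as a design document might specify them. */
    static const struct limit limits[] = {
        { "Vcc (V)",             4.75,  5.25 },
        { "Idle current (mA)",   0.0,  20.0  },
        { "Osc freq (MHz)",     11.0,  12.1  },
    };

    /* Stand-in for instrument readings from the unit under test. */
    static double measure_channel(int ch)
    {
        static const double fake[] = { 5.01, 12.3, 11.0592 };
        return fake[ch];
    }

    int main(void)
    {
        int n = (int)(sizeof limits / sizeof limits[0]);
        int pass = 1;
        for (int ch = 0; ch < n; ch++) {
            double v = measure_channel(ch);
            int ok = (v >= limits[ch].lo && v <= limits[ch].hi);
            printf("%-18s %8.3f  %s\n",
                   limits[ch].name, v, ok ? "PASS" : "FAIL");
            if (!ok) pass = 0;
        }
        printf("Unit %s\n", pass ? "ACCEPTED" : "REJECTED");
        return pass ? 0 : 1;   /* nonzero exit flags a rejected unit */
    }

Every unit runs the same fixed checklist against the same prespecified limits; nothing is being debugged here, only compared against the spec.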