??? 12/09/06 18:17
#129229 - take note ... nowhere do they mention "testing"
Responding to: ???'s previous message
As I've "preached" before, testing is not something you do to a new design. Testing is a thorough and rigorous process that is performed on a thoroughly debugged and verified design prior to foisting it off on an unsuspecting client or public. Testing is the LAST step before you "ship it." That is where you subject it to every form of abuse to which you claim it to be immune.
The sense I've gotten from your query is that you don't want to TEST your design, you want to experiment on it, perhaps prior to implementing it. That's done with simulation, which this gEDA package does support. However, gEDA has nothing whatever to do with testing.

Simulation is always idealized, even when it uses sophisticated strategies like Monte Carlo methods, wherein component values, temperatures, voltages, perhaps timing, and other specified parameters are varied over a prespecified range in order to identify sensitivities to those variations. (The first sketch at the end of this post shows the idea in miniature.) It takes a pretty sophisticated toolset to perform those functions, and it requires extensive model sets which are not free of cost, simply because they generally have to be maintained and verified in themselves.

Many years ago, I used a simulation tool called HILO, which cost on the order of US$200K per seat plus about $15K per year for maintenance. It had models for microprocessors and microcontrollers, though they were sold separately, and could simulate the behavior of these components in an environment including devices from many manufacturers, albeit all simulated. The simulation was entirely mixed-mode, and was capable, within limits, of identifying "glitches" and other anomalies. However, it was both slow and costly. A simulation of a large multiprocessor circuit, with many variable parameters, running a substantial code set over a wide temperature range, could take several weeks. I seriously doubt that you are willing to take the time, trouble, and expense to engage a tool such as that.

Minimizing the debugging of a small system involves, as Erik has repeatedly and quite correctly pointed out, a lot of mental work. A properly designed system contains nothing that is not absolutely required, but everything that is required. That demands, first, that you establish what those requirements are, and then that you design proper tests, yes, TESTS, to verify that the design meets the requirements, as well as the hardware with which to verify that the requirements are met and to what extent (failure rate). Until you have firm requirements, you can't do anything.

Once you have firm requirements, you must design your code and hardware to meet them. When you've finished, you should be able to trace each bit of code or hardware back to a requirement.

Once you've done that, you can try your code in a simulator to ensure it does what you intend. Then you combine the tested, and presumably functional, modules into an integrated package and subject it to stimuli that reflect real-world conditions in the simulator. (The second sketch below shows a minimal harness of this sort.) If your simulator doesn't support that, you may want to build a prototype and investigate the system behavior there. That will require an oscilloscope, a logic analyzer, and whatever hardware is necessary to produce the stimuli that a thorough checkout requires. If you need finely resolved variations in timing, your stimulus-generation setup will have to be capable of producing them.

Once you've verified that the firmware should do exactly what you want under all foreseeable conditions that are covered in the requirements, you're ready to go to a production model, usually a PCB. A PCB that does everything it is supposed to do 100% of the time at room temperature in the lab is one that is ready to go to physical testing. That's where the fun begins.
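First sketch: to make the Monte Carlo idea concrete, here is a minimal C program. It is not gEDA, HILO, or any real simulator; the circuit (a single RC low-pass), the tolerances, the tempco, and the spec window are all hypothetical values chosen for illustration. Each trial draws random component values and a random temperature, computes the -3 dB corner, and counts how often the result stays inside the spec.

/* Monte Carlo tolerance sweep -- a sketch of the idea, not a real
 * mixed-mode simulator. Circuit: one RC low-pass. All tolerances,
 * the tempco, and the spec window below are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TRIALS   100000
#define R_NOM    10e3        /* 10 kohm nominal, 1% parts       */
#define R_TOL    0.01
#define C_NOM    10e-9       /* 10 nF nominal, 5% parts         */
#define C_TOL    0.05
#define TEMPCO_R 100e-6      /* assumed 100 ppm/degC on R       */
#define FC_MIN   1500.0      /* required -3 dB window, Hz       */
#define FC_MAX   1680.0

/* uniform random value in [nom*(1-tol), nom*(1+tol)] */
static double spread(double nom, double tol)
{
    double u = (double)rand() / RAND_MAX;          /* 0..1 */
    return nom * (1.0 + tol * (2.0 * u - 1.0));
}

int main(void)
{
    int pass = 0;

    for (int i = 0; i < TRIALS; i++) {
        /* vary temperature over -40..+85 degC and parts over tolerance */
        double temp = -40.0 + 125.0 * (double)rand() / RAND_MAX;
        double r = spread(R_NOM, R_TOL) * (1.0 + TEMPCO_R * (temp - 25.0));
        double c = spread(C_NOM, C_TOL);
        double fc = 1.0 / (2.0 * M_PI * r * c);    /* -3 dB corner */

        if (fc >= FC_MIN && fc <= FC_MAX)
            pass++;
    }
    printf("%d of %d trials inside spec (%.2f%% yield)\n",
           pass, TRIALS, 100.0 * pass / TRIALS);
    return 0;
}

A real tool does this against a full circuit model instead of one formula, but the principle, vary everything within its specified range and watch the distribution of the result, is the same.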
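Second sketch: on tracing tests back to requirements, one cheap, low-tech way to keep that traceability is a table-driven harness in which every stimulus vector carries the ID of the requirement it verifies. Everything here is made up for illustration (the debouncer, the REQ numbers, the vectors); it only shows the shape of the technique.

/* Table-driven module test: each vector is tagged with the
 * requirement it verifies. The module under test (a trivial
 * 3-sample debouncer) and the REQ-nnn IDs are hypothetical. */
#include <stdio.h>

/* module under test: raw input must be stable for 3 samples */
static int debounce_update(int raw)
{
    static int stable = 0, candidate = 0, count = 0;

    if (raw == candidate) {
        if (++count >= 3) { stable = candidate; count = 3; }
    } else {
        candidate = raw;   /* input changed: restart the count */
        count = 1;
    }
    return stable;
}

struct vector {
    const char *req;   /* requirement verified by this step */
    int raw;           /* stimulus                           */
    int expect;        /* required debounced output          */
};

static const struct vector vec[] = {
    { "REQ-017", 1, 0 },   /* a single noisy sample must not pass  */
    { "REQ-017", 0, 0 },
    { "REQ-018", 1, 0 },   /* three stable samples must pass       */
    { "REQ-018", 1, 0 },
    { "REQ-018", 1, 1 },
};

int main(void)
{
    int failures = 0;

    for (unsigned i = 0; i < sizeof vec / sizeof vec[0]; i++) {
        int got = debounce_update(vec[i].raw);
        if (got != vec[i].expect) {
            printf("FAIL %s step %u: raw=%d expected=%d got=%d\n",
                   vec[i].req, i, vec[i].raw, vec[i].expect, got);
            failures++;
        }
    }
    if (failures)
        printf("%d failure(s)\n", failures);
    else
        printf("all %u vectors pass\n",
               (unsigned)(sizeof vec / sizeof vec[0]));
    return failures != 0;
}

When a vector fails, the report names the requirement, so the trace from failure back to requirement is automatic.

RE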