??? 02/02/07 18:42
#131934 - Without specifics, there's no proof Responding to: ???'s previous message
No, I'm not happy. An apple is not an orange just because you say it is.
You never did explain what that one oddly-constructed sentence was supposed to tell us. How do you support your conjecture? Repeating a falsehood, or shouting it louder, doesn't make it true. If you'd present relevant examples that actually support your claims, they'd be effective. I've not even once seen you do that. Without example, there is no evidence. Erik Malund said:
If an ISR is 'running at the edge' as I have seen often, an occasional speed change will provide for a juicy "once a week in one of 1000 units" bug.
Of course, in a well thought-out design such would not happen, but the evidence in this forum is that 'testing' takes precedence over 'designing' in way too many cases. That the problem I bring up will (according to you) not affect you, does not mean that it does not exist. I'm not sure I know what you mean by "running at the edge," but I do agree that it appears that "testing" (using the term loosely) is often abused. Testing is what you do when you KNOW the design is correct. It's a different thing from "trial."

A sensible design cycle involves:
(1) preparation of system specifications and documentation, including all required functional and performance requirements,
(2) analysis and breakdown of system requirements,
(3) design of rigorous tests to verify compliance with those requirements,
(4) design of hardware/firmware to meet the tests set forth in (3),
(5) realization (construction) of the design, including both hardware and firmware, including coding and debugging if necessary, and concluding with
(6) final testing, in which the presumed-fully-functional unit is subjected to conditions that verify both that the unit's behavior meets all specified conditions and requirements set forth in (1), and that all specified behaviors when normal operating conditions are exceeded are met as well.

If you're satisfied with a simple checkout, e.g. where you push all the buttons and physically examine all the indicator LEDs (or whatever) to see that it seems to behave as you initially expected, then you get what you deserve. If, on the other hand, you have devised a test fixture that enables you to test all the possible combinations of push-buttons, switches, serial input, and feedback, with all possible combinations of memory content and over the entire temperature and mechanical stress ranges in your requirements, then you know it's functioning as it should.
If you can envision a combination of events that could occur, aside from power failure, nuclear strike, flood, etc., that would cause the device to malfunction, then you've got a faulty design. If it's possible, in any numerical sense, to cause the device to fail more often than the specified failure rate allows, e.g. because an externally generated asynchronous interrupt occurs at an "inconvenient" time, then you have a faulty design, i.e. you have a requirement that isn't being met.

Now, as for your logic... If you make assertions that you cannot or will not support with specific evidence, presented in full, then they are just guesses. If you make such assertions KNOWING that you cannot or will not provide very specific supporting evidence, then they are just fabrications.