??? 01/22/04 23:46
#63173 - RE: reliability vs cost
Responding to: ???'s previous message
Reliability is one of the key terms in professional electronics.
But we must distinguish. For an absolute beginner it's definitely not 'reliability' he wants. He looks for the exciting adventure of getting this microcontroller stuff running even with the most simple tools. That's my impression after the many postings in which absolute beginners insist on fabricating their 'applications' with the help of breadboards. Well, I remember this feeling very well from when I started with electronics, and when I demonstrated to some of 'those' experts that my Z80 application was working even without a factory-made printed circuit board...

But for the professional, 'reliability' is the basis of his daily work, the guarantee that he will be able to earn his money in the future as well. He no longer looks for the exciting adventure of getting the application running with amateurish tools. His challenge goes in a different direction (I hope)...

I would discuss 'reliability' on at least three levels: 1. 'Reliability of product', 2. 'Reliability of design' and 3. 'Reliability of my work'.

1. 'Reliability of product' is the 'simple' calculation of the integral failure rate of the product as the sum of the failure rates of the individual components and procedures (a minimal parts-count sketch of this follows below). Although this sounds simple, it isn't at all. There is so much confusing information in the literature, even from manufacturers, that estimating the integral failure rate becomes so complicated that many people try to avoid it entirely. The result is often a heavy loss of reliability of their products and a heavy loss of reputation in the market.

2. 'Reliability of design' means: does the product fulfill the needs of the customer? It's not at all the same as 'reliability of product', because there are often situations where the product does not fail but nevertheless does not satisfy the needs of the customer.

3. 'Reliability of my work' means: Do I follow a design and fabrication procedure which 'guarantees' the development of reliable products, where the need for additional testing and redesigning is minimal? A procedure which 'guarantees' that applications will pass CE testing on the first try? A procedure which guarantees that my customer is satisfied with the product once again, and that I will even gain new customers? The reliability of my work is the basis of the future of my company. If the reliability of my work isn't high, then each development turns into a dangerous adventure in which I do not actually know whether it will ever become successful. Then I don't know when the design phase will be finished, how high the development costs will be, and how our customer will behave when he finds a mistake... 'Reliability of my work' is so important because it also decides the development costs, which are a main factor in the price calculation!

What are the consequences of all this? If, from the beginning, I add here and there some more components to my design, components which are not absolutely necessary but which help to increase 'reliability of design' and to shorten the testing phase, then I slightly reduce 'reliability of product' due to the increased number of components, but I can heavily increase the overall reliability and decrease the overall costs!

Think about a designer who neglects adequate filtering and protection circuitry at some input. OK, he will save the cost of two diodes, a resistor, maybe a capacitor and a TransZorb. He will also increase 'reliability of product', because his application contains fewer components. Components which can never fail, of course. BUT: if the circuit catches an ESD event, his nice product is ruined!!
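Before turning to the costs side, here is what the parts-count calculation mentioned under point 1 looks like in its simplest form: a series model, where the product fails as soon as any single component fails, so the individual failure rates simply add up. Take it as a rough sketch only; the component list and FIT values below are hypothetical placeholders, not data from any real design.

# Parts-count estimate under a series model: the product fails as soon as
# any single component fails, so the component failure rates add up.
# All FIT values and quantities below are hypothetical placeholders.

FIT = 1e-9   # 1 FIT = one failure per 10**9 component-hours

components = {                    # name: (failure rate in FIT, quantity)
    "ceramic capacitor":     (0.5, 120),
    "resistor":              (0.2, 200),
    "opamp":                 (10.0, 16),
    "LL-grade electrolytic": (2.0,  40),
    "connector contact":     (1.0,  60),
}

lambda_total = sum(rate * FIT * qty for rate, qty in components.values())
mttf_hours = 1.0 / lambda_total

print(f"integral failure rate: {lambda_total / FIT:.0f} FIT")
print(f"MTTF: {mttf_hours:.3g} hours")

Adding a handful of protection parts to such a list raises the sum only marginally, which is exactly the point made above.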
And what about costs? If the failure of his product causes the standstill of a whole factory, do you think he has really benefited from the 2 dollars he saved on the components? Even if he does not have to pay for the standstill, he will surely be pushed out of business, with a totally ruined reputation. He will never again need to think about saving costs by omitting filtering and protection components...

OK, that was an obvious example. But there are much more subtle ones. Let's think about the designer of an audio mixing console for professional studio recording: Although application notes and datasheets claim that power supply decoupling needs are satisfied by simple 100nF capacitors, a professional designer will decouple each operational amplifier individually with an RC filter of somewhere between 10Ohm + 10µF/35V and 100Ohm + 100µF/25V. And this although electrolytic capacitors are 'known' to be highly unreliable. The reason for this measure is that the notorious instability of large arrays of opamps, involving very high gain stages and output stages driving high capacitive loads (long cables), can be heavily reduced. The consequence is an exceptionally low noise level and very low channel-to-channel crosstalk. In other words, 'reliability of design' is heavily increased.

What about 'reliability of product'? The failure rate of a high quality electrolytic capacitor (Long Life grade) is about 20 x 10**-9/hour. The failure rate is defined at 25°C and at a certain load rating. 'Failure' is defined as a certain change of capacitance, leakage current, etc. from the nominal value. 'Failure' here does not mean that the capacitor is fully damaged. Even when a failure has occurred, the capacitor will not necessarily badly influence the performance of the product. E.g. if only a capacitance change of 30% has occurred, it will still be helpful for decoupling. Dangerous are only 'total failures', like 'short circuit' conditions. The failure rate for these total failures is about ten times smaller than the nominal failure rate. So, the total failure rate for a modern LL-grade electrolytic capacitor is about 2 x 10**-9/hour.

If the mixing console contains 1000 electrolytic capacitors, then the total failure rate (referring to these electrolytics) is 1000 x (2 x 10**-9/h) = 2 x 10**-6/h. The MTTF (mean time to failure) is then 1 / total failure rate = 1 / (2 x 10**-6/h) = 500khours. So, after a service life of 10**5 hours (11.4 years), there's a probability of about 80% that no total failure has occurred (the survival probability is exp(-(2 x 10**-6/h) x 10**5h) = 0.82)! Keep in mind that this calculation was done for a standard LL-grade type. Today, there are LL-grade electrolytics available which provide much lower failure rates, and this at rather low prices! But, on the other hand, the choice of no-name capacitors without any failure rate specification can result in total disaster!

Some people might ask me now how to get 20 x 10**-9/h from a typical 'service life' specification: For an LL-grade electrolytic capacitor, let the service life be specified as 250kh at <=40°C with a failure rate of 0.5%. Then the MTTF is 250kh / 0.5% = 250kh x 200 = 50Mh. This gives a failure rate of 1 / 50Mh = 20 x 10**-9/h.

What can we conclude from this simple calculation? The introduction of additional components for filtering and protection will never, never, never relevantly decrease 'reliability of product' due to the increased number of components, but will always increase the overall reliability!! So, it's always of benefit to add filters and protectors!
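For the record, the capacitor arithmetic above in a few lines, assuming the usual constant-failure-rate (exponential) model; this merely restates the calculation from the text:

import math

# From the 'service life' specification to the failure rate:
# 0.5% of parts are allowed to fail within 250 kh, so MTTF ~ 250 kh / 0.005.
service_life_h  = 250e3
failed_fraction = 0.005
mttf_cap   = service_life_h / failed_fraction   # 50e6 h
lambda_cap = 1.0 / mttf_cap                     # 20 x 10**-9 /h

# Only about one in ten of those failures is a 'total failure' (e.g. a short).
lambda_total_failure = lambda_cap / 10.0        # 2 x 10**-9 /h

# 1000 electrolytics in the mixing console, series model:
n_caps = 1000
lambda_console = n_caps * lambda_total_failure  # 2 x 10**-6 /h
mttf_console   = 1.0 / lambda_console           # 500 kh

# Probability of surviving 10**5 hours (11.4 years) without a total failure,
# with a constant failure rate: R(t) = exp(-lambda * t)
t = 1e5
reliability = math.exp(-lambda_console * t)     # ~0.82, i.e. about 80%

print(f"lambda per capacitor:        {lambda_cap:.1e} /h")
print(f"total-failure rate, 1000 x:  {lambda_console:.1e} /h")
print(f"console MTTF:                {mttf_console:.0f} h")
print(f"P(no total failure, 10^5 h): {reliability:.2f}")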
Some people doubted that the specification of 2.6 Mhours MTTF for Erik's switcher can be correct, because it would be impossible to observe this device for such a long time... Well, modern statistics tells us that this analysis can be done very well with so-called 'accelerated aging tests': You expose a certain number of samples to a heavily increased temperature (150°C or even 200°C!) and count the number of failures after about 1000 hours or so. Then, with the help of the chi-square distribution and the Arrhenius equation (I have posted some lines about this issue here in the past), you can easily estimate the failure rate at much lower temperatures! A rough sketch of such an estimate follows in the PS below. By the way, an MTTF of 2.6 Mhours is not soooo good. It represents a failure rate of 385 x 10**-9/h, or 385 FIT! OK, Erik did say a 'cheap' switcher...

Kai
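PS: For those who want to see what such an accelerated-aging estimate looks like in practice, here is a rough sketch. The sample size, test duration, failure count, confidence level and activation energy below are invented for illustration only; the method (a chi-square confidence bound on the failure rate at the stress temperature, translated to the use temperature with an Arrhenius acceleration factor) is the one referred to above.

import math
from scipy.stats import chi2

# --- hypothetical accelerated test (all numbers made up) --------------
n_samples  = 100        # devices on test
test_hours = 1000.0     # hours at elevated temperature
failures   = 1          # failures observed during the test
T_stress_C = 150.0      # stress temperature
T_use_C    = 40.0       # intended operating temperature
Ea_eV      = 0.7        # assumed activation energy
confidence = 0.60       # 60% upper confidence bound

# Upper bound on the failure rate at the stress temperature
# (time-terminated test, chi-square with 2r+2 degrees of freedom):
device_hours  = n_samples * test_hours
lambda_stress = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * device_hours)

# Arrhenius acceleration factor from stress temperature down to use temperature:
k_eV = 8.617e-5                                  # Boltzmann constant in eV/K
AF = math.exp((Ea_eV / k_eV) *
              (1.0 / (T_use_C + 273.15) - 1.0 / (T_stress_C + 273.15)))

lambda_use = lambda_stress / AF
print(f"acceleration factor: {AF:.0f}")
print(f"lambda at use temperature: {lambda_use:.2e} /h "
      f"= {lambda_use / 1e-9:.0f} FIT, MTTF = {1.0 / lambda_use:.2e} h")

With real test data and a justified activation energy, this is how an MTTF figure like the 2.6 Mhours above can be backed up without having to watch the device for 300 years.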



