Posted 05/18/08 23:42 (modified 05/18/08 23:45)
#154872 - Dropping bits will not prevent flickering...
Marshall said:
Say you have an 8-bit ADC. With integration and decimation you can get 12 bits out of this, then drop the bottom two bits if you are having "flicker issues", and you have a nice smooth reading with 10 bits of effective resolution from your 8-bit ADC.

Marshall said:
• Some noise has to be present in the signal, at least 1 LSB.
• If the noise amplitude is not sufficient, add noise to the signal.

Dropping bits will not prevent flickering, not even when integrating and decimating! And adding noise is of no help either:

Assume an 8-bit ADC with a static signal superimposed by 1 LSB of noise at its input. This is a typical situation when using a microcontroller containing a built-in ADC. Now assume that the input signal lies in that "window" which gives an output code of "128". But due to the 1 LSB of noise the output data stream isn't all "128"; it contains a few "127" bytes. Now take two such byte streams, one containing 247 * "128" + 9 * "127" and the other 248 * "128" + 8 * "127". This yields:

First byte stream: 247 * "128" + 9 * "127"
Summed up over 256 samples and divided by 256: 0111 1111.1111 0111
Rounded to 12 bits:                            0111 1111.1111
Dropping the lowest two bits:                  0111 1111.11
=============

Second byte stream: 248 * "128" + 8 * "127"
Summed up over 256 samples and divided by 256: 0111 1111.1111 1000
Rounded to 12 bits:                            1000 0000.0000
Dropping the lowest two bits:                  1000 0000.00
=============

(Check the sums: 247 * 128 + 9 * 127 = 32759, while 248 * 128 + 8 * 127 = 32760. The first sum ends in ...0111, which rounds down at the 12-bit boundary; the second ends in ...1000, which rounds up. A single extra "128" byte in 256 samples is enough to push the result across the step.)

Again flickering! And no, this has nothing to do with rounding: I can always fabricate an example where dropping any number of bits finally results in flickering.

Kai

PS: By the way, by oversampling you can only increase the resolution, but not the accuracy and precision!
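To make the arithmetic in the worked example above easy to verify, here is a minimal C sketch. It is not from the original post: the round-to-12-bits step is implemented as round-half-up ((sum + 8) >> 4), and the function name decimate() is my own; both are assumptions chosen to match the figures shown above.

#include <stdio.h>
#include <stdint.h>

/* Sum of 256 8-bit samples (fits in 16 bits), rounded to 12 bits,
   then the bottom two bits dropped -> 10-bit result. */
static uint16_t decimate(uint16_t sum256)
{
    uint16_t twelve = (sum256 + 8) >> 4;  /* round-half-up to 12 bits */
    return twelve >> 2;                   /* drop the lowest two bits */
}

int main(void)
{
    uint16_t sum_a = 247u * 128u + 9u * 127u;  /* first stream:  32759 */
    uint16_t sum_b = 248u * 128u + 8u * 127u;  /* second stream: 32760 */

    printf("first stream  -> %u\n", (unsigned)decimate(sum_a)); /* 511 = 01 1111 1111 */
    printf("second stream -> %u\n", (unsigned)decimate(sum_b)); /* 512 = 10 0000 0000 */
    return 0;
}

The two nearly identical streams come out as 511 and 512: the 10-bit result still flickers across the 0111 1111.11 / 1000 0000.00 boundary, exactly as the worked example shows.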