#127248 - not at all... depends on application...
Jez Smith said:
"using a checksum routine in your code space to check the integrity of your code space doesn't make a lot of sense"

As usual, this depends on the circumstances. There are many legitimate uses of such a technique. For example, you might have a "known good" section of code memory (e.g. the internal code memory of an 8x52, or the bootloader section on many bigger derivatives) and want to check a section which is likely to get corrupted, e.g. when ISP reprogramming goes wrong. Or this might simply be a requirement (legislative (I've just done this), customer's, ...), so there is no dispute about whether it's reasonable or not :-(
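To make that concrete, here is a minimal sketch of such a self-check, using the plain byte-sum Jez mentions below. The function name and the convention are assumptions for illustration: it presumes the build tool patched the last byte of the region with the two's complement of the sum of all the other bytes, so the whole region sums to zero. On a real 8051 the pointer would also need your compiler's code-space qualifier (code in Keil, __code in SDCC); plain const keeps the sketch compilable as standard C.

    /* Sketch: verify a code-memory region whose last byte holds
       (by assumption) the two's complement of the sum of all the
       other bytes, so the intact region sums to zero. */
    static unsigned char region_ok(const unsigned char *region,
                                   unsigned short len)
    {
        unsigned char sum = 0;

        while (len--)
            sum += *region++;   /* 8-bit add, carries simply fall off */

        return sum == 0;        /* zero => no simple error detected */
    }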
Jez also said:

"Also people confuse a checksum, which is simply the two's complement of the sum of all the data, with a cyclic redundancy check code, which is more complex and allows a certain degree of error detection and correction."

Not quite so. There are many flavours of the "simple checksum", for example with different input and output word widths, with or without carry recycling, with adding/xoring of the current address... Usually these are simple and straightforward, and not too useful*, either...

Then, the CRCs come in many flavours too, and the sheer number of bits is no guarantee of their usefulness. There are "known good" polynomials, but there are also "known bad but widely used" ones around. CRCs were developed mainly to checksum serial communication links, where burst errors are the typical fault; that is not the typical fault mechanism in memory devices, so a CRC might not be the best choice here. However, the algorithms and polynomials are widely known, used and accessible, so it is easy to employ them; that's why the CRC is the "checksum-of-choice" when no particular requirement is imposed, or when one simply wants "some" degree of security but does not want to go into the details of exactly how much "some" is... Note that CRCs don't provide any means of error correction, though; for that, more complicated algorithms are available.

If more paranoia is needed, a good choice is to resort to some standard form of "digest" (google e.g. for MD5 or SHA-1, for a starter), which in fact is only a complicated checksum, with a rather wide output (128 bits for MD5, 160 for SHA-1) and with such a complicated method of forming the checksum that it is very hard to deliberately construct two different files with the same digest (checksum); which also means that it is very unlikely that a simple error will slip through undetected. However, these tend to be computationally much more intensive... Which is the proper choice depends on the amount of paranoia one wants to put in...

As for the processing time required, it depends on many factors even for a 32-bit CRC: the polynomial itself, the amount of memory (RAM/ROM) which can be "given up" for the CRC itself (table-driven implementations are considerably faster; a sketch follows at the end of this post), and of course the choice of programming language (asm vs. HLL) makes a big impact here, as a relatively simple (optimisable) task is repeated many, many times. You can get a CRC-16 from the code library on the left; that might give you a rough picture. Or, for another rough estimate, my "handcrafted" MD5 implementation needs around 16 kcycles for a 64-byte block.

Jan Waclawek

---
* There are two basic measures of "usefulness": one is the capability of error detection (in many forms) and possibly correction (robustness against "unintended" errors, e.g. communication noise or loss of data in memory devices); the other is how easy it is to modify the "source" so as to obtain the same "checksum" (robustness against deliberate manipulation of the data).
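For a rough idea of what "table-driven" means above, here is a sketch of CRC-16/CCITT (polynomial 0x1021, one of the "known good" ones; the 0xFFFF initial value is the usual CCITT choice). The function names are just for illustration. The 256-entry table costs 512 bytes of memory, but the inner loop then digests a whole byte per lookup instead of grinding bit by bit, which is exactly the speed-for-memory trade mentioned in the post.

    #include <stddef.h>

    /* Sketch: table-driven CRC-16/CCITT (poly 0x1021, init 0xFFFF). */

    static unsigned short crc_table[256];

    /* Build the lookup table: one MSB-first polynomial division
       per possible input byte. */
    static void crc16_init(void)
    {
        unsigned short i, b, crc;

        for (i = 0; i < 256; i++) {
            crc = (unsigned short)(i << 8);
            for (b = 0; b < 8; b++)
                crc = (crc & 0x8000u)
                    ? (unsigned short)((crc << 1) ^ 0x1021u)
                    : (unsigned short)(crc << 1);
            crc_table[i] = crc;
        }
    }

    /* Checksum a buffer, one table lookup per byte. */
    static unsigned short crc16(const unsigned char *p, size_t n)
    {
        unsigned short crc = 0xFFFFu;   /* CCITT initial value */

        while (n--)
            crc = (unsigned short)((crc << 8)
                  ^ crc_table[((crc >> 8) ^ *p++) & 0xFFu]);

        return crc;
    }

Note that crc16_init() must run once before the first crc16() call; on an 8051 one would more likely precompute the table into code memory and, for the last bit of speed, hand-code the inner loop in assembler, which is where the asm-vs-HLL impact mentioned above shows up.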