#97008 - budget (07/11/05 12:27)
Responding to a previous message
I agree that sacrificing some resources for debug purposes may be necessary, but I wouldn't go with a fixed amount; just balance it well. First, sacrifice even 90% of your resources to debugging, just to get things working. Then scale the debug support back as things stabilize, and finally, when everything works fine and has little or no chance of failing, leave just stubs so it can be put back in easily if needed, and give 99.9% of the resources to the main functionality (see the sketch below). Of course, if the chance (or cost) of failure is higher, leave more debug code in, or integrate it with your standard alert/problem/troubleshooting resources; these tend to work on a similar basis.
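As a rough illustration of leaving stubs behind, here is a minimal C sketch assuming a compile-time DEBUG_LEVEL switch (the macro names are mine, not anything from the original discussion): while stabilizing you build with the verbose level, and once things settle you drop the level and the trace calls compile away to empty stubs that are trivial to re-enable.

/*
 * Minimal sketch of the "scale the debug budget" idea.
 * DEBUG_LEVEL and the DBG_* macros are hypothetical names chosen
 * for illustration: 2 = verbose, 1 = errors only, 0 = stubs only.
 */
#include <stdio.h>

#ifndef DEBUG_LEVEL
#define DEBUG_LEVEL 2
#endif

#if DEBUG_LEVEL >= 2
#define DBG_TRACE(msg)  printf("TRACE: %s\n", (msg))
#else
#define DBG_TRACE(msg)  ((void)0)   /* stub: compiles away to nothing */
#endif

#if DEBUG_LEVEL >= 1
#define DBG_ERROR(msg)  printf("ERROR: %s\n", (msg))
#else
#define DBG_ERROR(msg)  ((void)0)   /* stub: easy to turn back on later */
#endif

int main(void)
{
    DBG_TRACE("entering main loop");   /* use freely while stabilizing */
    DBG_ERROR("sensor timeout");       /* kept longer, costs very little */
    return 0;
}

Building with -DDEBUG_LEVEL=0 leaves only the stubs, so virtually all of the code and time budget goes back to the main functionality while the hooks stay in place.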
And last but not least, feel free to scale the debug support up to eat all the otherwise unused resources if you know you won't need them (or, if you might need them later, make it easy to scale back down). If you're sending 10 bytes of data over TCP, fill the packet up to the 64-byte minimum frame size with debug information, CRC checksums, redundant repeats and all the almost-useless stuff you MIGHT sometimes need, instead of padding it with useless zeros; a sketch of that follows below. If you have 2 ports free on a 4-port derivative and won't need them in the future, sacrifice them to sending status or other such data. Of course there are applications where squeezing the last cycle out of the CPU is desirable and "the faster, the better", but more often the device is just waiting for an event in an idle loop. More often you find half the program memory empty than you need to size-optimize your code to make it fit. If you're rich, use your riches!
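Here is a rough C sketch of that padding idea, with the 64-byte minimum, the field layout and the CRC-16 routine all chosen for illustration rather than taken from the post: the real payload goes first, then a status byte, an uptime counter and a redundant CRC, and only then is any remaining space filled (here by simply repeating the payload).

/*
 * Sketch: fill a short frame with useful data instead of zero padding.
 * Field layout and the 64-byte minimum are illustrative assumptions.
 * The caller must provide a frame buffer of at least MIN_FRAME_LEN
 * bytes and a non-empty payload.
 */
#include <stdint.h>
#include <string.h>

#define MIN_FRAME_LEN 64

/* Plain bitwise CRC-16-CCITT, included so the example is self-contained. */
static uint16_t crc16_ccitt(const uint8_t *buf, uint16_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (uint8_t i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

uint16_t build_frame(uint8_t *frame, const uint8_t *payload,
                     uint16_t payload_len, uint8_t status, uint16_t uptime_min)
{
    uint16_t pos = 0;

    memcpy(&frame[pos], payload, payload_len);      /* the 10 real bytes */
    pos += payload_len;

    /* Instead of zero padding, append things we MIGHT need someday. */
    frame[pos++] = status;                          /* device status byte */
    frame[pos++] = (uint8_t)(uptime_min >> 8);      /* uptime, big-endian */
    frame[pos++] = (uint8_t)(uptime_min & 0xFF);

    uint16_t crc = crc16_ccitt(frame, pos);         /* redundant check on top of TCP's */
    frame[pos++] = (uint8_t)(crc >> 8);
    frame[pos++] = (uint8_t)(crc & 0xFF);

    while (pos < MIN_FRAME_LEN) {                   /* repeat the payload as filler */
        frame[pos] = payload[pos % payload_len];
        pos++;
    }
    return pos;                                     /* same wire cost as zero padding */
}

The frame still costs exactly the same to send as one padded with zeros, but the receiver (or a protocol analyzer) now gets a status snapshot, an extra integrity check and a redundant copy of the data for free.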