#153131 - What do you mean? (04/10/08 20:56)
Erik Malund said:
Richard Erlacher said:
"If there were ONE and only ONE programming adapter for ALL field-programmable devices, would it matter at all what's going on?"
YES!!! Some implementations would have to be so convoluted that throughput would suffer greatly and, being so convoluted, would likely be code-hungry and error-prone.

What do you mean by "code hungry" and "convoluted"? Why would it matter whether the ISP application is 2K bytes or 2T bytes in size? It runs on the host machine. The object file being programmed is just that, namely, an object.
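To put that in concrete terms, the host side of an ISP tool is, at bottom, a loop that reads records out of the object file and pushes them down the link. Here is a minimal sketch; the serial device, the 9600-baud setting, and the "target echoes a dot for each record" handshake are placeholders for illustration only, not any particular vendor's bootloader protocol.

/*
 * Minimal host-side ISP front end: read an Intel HEX object file and
 * push it, record by record, down a serial link to a target-resident
 * bootloader.  Device path, baud rate, and the per-record ack are
 * assumptions made for the sake of illustration.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

static int open_link(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios t;
    tcgetattr(fd, &t);
    cfmakeraw(&t);
    cfsetispeed(&t, B9600);              /* assumed link speed */
    cfsetospeed(&t, B9600);
    tcsetattr(fd, TCSANOW, &t);
    return fd;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s image.hex /dev/ttyS0\n", argv[0]);
        return 1;
    }

    FILE *hex = fopen(argv[1], "r");
    int link = open_link(argv[2]);
    if (hex == NULL || link < 0) {
        perror("open");
        return 1;
    }

    char rec[600];                       /* longer than any legal record */
    while (fgets(rec, sizeof rec, hex)) {
        if (rec[0] != ':')               /* skip anything that isn't a record */
            continue;

        /* The object file is just that: an object.  Send the record
           verbatim; the bootloader does the checksum and the flash write. */
        if (write(link, rec, strlen(rec)) < 0)
            return 1;

        char ack = 0;                    /* assumed ack: target echoes '.' */
        if (read(link, &ack, 1) != 1 || ack != '.') {
            fprintf(stderr, "no ack from target\n");
            return 1;
        }
    }

    close(link);
    fclose(hex);
    return 0;
}

Whether that compiles to 2K bytes or 2T bytes on the host is neither here nor there; the target never sees anything but the records.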
Erik Malund said:
What is good for the goose is not always good for the gander.

I don't follow the relevance, nor do I follow the reasoning of this remark.

Erik Malund said:
As to customer updates, I cannot see customers (with very few exceptions) of small embedded systems having the willingness or knowledge to attach a different interface for program updates. At a previous place of employment I found some customers to be, computer-wise, 40% less intelligent than a hole in the ground. Thus, if your thingy is attached to a PC via RS232, that is how you must do it; if you have a USB slot and no umbilical cord, that is how you must do it. It may be that your customers are very smart; if so, I congratulate you. I, however, cannot require computer literacy from my customers.
Erik

Fewer than 20 of the many dozens of serially interfaced MCU applications I've put out in the past 30 years have used an async protocol, and fewer yet have used RS232. Most of the applications I've worked out have used some other form of comms when one was necessary, but the majority don't use serial I/O, and the majority don't allow any user interaction anyway. If they do have user interaction, it is through a user interface, most likely a display and a keypad or switch array, and maybe a knob or two.

If a customer is going to upgrade or field-fix the firmware, he will have to connect some cable. Normally, one would want to ensure that he has no access to the connector through which he could do that. Moreover, you'd want to ensure he knows not only which application does that job, but how it works, so he won't "break" something. If he has to take off the lid and plug the field-fix plug into the field-fix socket, that would seem to be the right way to do things, so that he's not tempted to use that socket for the "wrong" purpose, whatever that might be. My microwave oven, on which I once changed a microswitch, has a couple of MCUs, and there's a dedicated test socket. I suspect that, if one wanted, one could use that for a field fix, if that were even possible. I've got probably a dozen or two computer-controlled devices in the house, yet I've never once had to install a patch or "upgrade." I suppose that if the code is right, it seldom needs to be "upgraded," though I can imagine situations in which it might become desirable. However, I don't see the uninitiated end user doing that; rather, I see a service technician doing it. He's probably smart enough to plug the cable into the correct socket.

Moreover, we're discussing programming of MCUs in situ. Upgrades are a teensy subset of that sort of thing. I'm all for allowing post-manufacturing programming, upgrades, and field fixes. I don't know what percentage of all applications use async serial comms as the user interface. I also don't know what proportion of applications even use serial I/O at all. I'm looking for a way to avoid the boondoggles that appear on this forum all the time, namely a programmer that will program version B2 of a given part but won't program version B3 or version B1, not to mention the question of which combination of hardware and software might work at all. You're right, in that there are WAY too many comments that something used to "work" but no longer does. There are also WAY too many complaints that a given circuit, published by the manufacturer, doesn't "work." If there were One and only One host interface between the host computer and the target device, using as few wires as possible and as simple a buffering scheme as possible, capable of programming ANY new device intended to be field-programmable, irrespective of manufacturer, those complaints would go away, wouldn't they?
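As for how little the host end of such an interface needs to be, here is a sketch of the bare four-wire (TCK/TMS/TDI/TDO) case. The pin-driver functions are stubs to be pointed at whatever adapter is on hand (a parallel port, an FTDI cable in bitbang mode, a GPIO header), and the IDCODE read at the end assumes the part follows the usual IEEE 1149.1 default of selecting IDCODE after a TAP reset.

/*
 * Bit-banged host side of a four-wire JTAG connection.  Everything
 * below the pin stubs is plain IEEE 1149.1 TAP handling; only the
 * stubs themselves depend on the adapter hardware.
 */
#include <stdint.h>

/* Placeholder pin drivers: replace the bodies with whatever actually
   drives the adapter (parallel-port register, FTDI bitbang, GPIO). */
static void set_tck(int level) { (void)level; }
static void set_tms(int level) { (void)level; }
static void set_tdi(int level) { (void)level; }
static int  get_tdo(void)      { return 0; }

/* One TCK cycle with the given TMS/TDI; TDO is stable across the
   rising edge, so it is sampled after the clock is raised. */
static int jtag_clock(int tms, int tdi)
{
    set_tms(tms);
    set_tdi(tdi);
    set_tck(0);
    set_tck(1);
    return get_tdo();
}

/* Walk the TAP state machine with a sequence of TMS bits, LSB first. */
static void jtag_tms_seq(uint32_t bits, int nbits)
{
    while (nbits--) {
        jtag_clock(bits & 1, 0);
        bits >>= 1;
    }
}

/* Shift nbits through the currently selected register, LSB first,
   raising TMS on the last bit so the TAP lands in Exit1. */
static uint32_t jtag_shift(uint32_t out, int nbits)
{
    uint32_t in = 0;
    for (int i = 0; i < nbits; i++) {
        int last = (i == nbits - 1);
        int tdo  = jtag_clock(last, (out >> i) & 1);
        in |= (uint32_t)tdo << i;
    }
    return in;
}

/* Example: reset the TAP and read the 32-bit IDCODE, which most parts
   present as the default data register after a TAP reset. */
uint32_t jtag_read_idcode(void)
{
    jtag_tms_seq(0x1f, 5);    /* five TMS=1 clocks: Test-Logic-Reset */
    jtag_tms_seq(0x02, 4);    /* TMS 0,1,0,0: Run-Test/Idle, Select-DR,
                                 Capture-DR, Shift-DR */
    return jtag_shift(0, 32); /* clock out 32 bits of IDCODE */
}

Everything beyond that is a matter of what gets shifted through the instruction and data registers, which is where the device-specific part lives.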
I built my own JTAG interface in about an hour, most of which was spent whittling the aluminum case for the little board and connectors. I figure that's about right, don't you? I couldn't get shipment on an EVK for the MCU in which I had taken an interest, so I built my own. It took about an hour to build the hardware. Downloading the required software took about half an hour, including the search, and within an hour of completing the hardware I had the MCU running on my prototype. That's the sort of throughput I look for. I don't care whether the ISP software takes ten seconds or eleven. If I were in production, I wouldn't use ISP anyway. Field fixes don't require blazing throughput either.

RE