#141242 - break it up
Posted: 06/25/07 20:02 | Msg Score: +2 (+1 Informative, +1 Good Answer/Helpful)
Erik Malund said:
> If you have a very complicated calculation, it is a good test to test it with various inputs. If that is difficult/impossible by external means, you can patch them in in the ICE and see the result. This, particularly, is my way of testing whether the value from e.g. a sensor is "impossible", which should give an error code, not lock the system.

Fine, that sounds like a good use of the ICE, but is there a reason why this 'very complicated calculation' cannot be tested in isolation for correct behavior? Is it imperative that the calculation be surrounded by the rest of the program?

It is very helpful in software testing to test each unit of function in a program in isolation, in much the same way every part of an airliner gets tested before anyone tries to fly the thing as a whole. Often when writing firmware for a device such as an 8052, we do not have the resources required to implement the layers of abstraction that make this decoupling easy, but that is not to say it cannot be done. Even if the program does not fit into memory as a whole with optimization turned off, chances are each functional block will fit on its own.

To do this, code needs to be separable. Do not intertwine your sensor input and actuator/whatever output code with that of the calculation. Code written like that is not only a nightmare to test and debug, it is also difficult to maintain and reuse. I won't go into methods of separating code here other than to say there is no reason why doing so has to increase memory or processing overhead.

Take this calculation, put it in an infinite loop with a break-point on every iteration. Feed it data with the ICE, run it, check the result, repeat as necessary.

Erik Malund said:
> Tell me ANY way of "getting useful information" other than an ICE that will in no way change the timing/location in memory of routines/variable use of the system.

Other than adding some sort of debug logging and permanently coding your routines and timing around it, I cannot. The point is, your code should not depend on location (nor, as much as possible, on timing, although in embedded development this is obviously a special consideration). Every unit of code (I'm refraining from using the word 'function', because a unit may consist of several functions) needs to work robustly with the minimum of external properties influencing its behavior. This concept is absolutely key to testable code.

Unsurprisingly, isolating blocks of code from one another isolates blocks of bugs from one another, which on a divide-and-conquer basis makes them rather easier to eradicate. The biggest causes of 'dynamic' bugs are those where one part of a program interferes with another. Try to keep your units of code self-contained, and test that they do keep themselves to themselves.

Here's a classic example of a 'dynamic' bug: an interrupt pushes data into a FIFO that occasionally overruns its buffer and writes over the next variable in memory, causing strange, unpredictable behavior. The overrun occurs because the main loop, which pops data out of the FIFO, does not always cycle around fast enough. This may be down to a certain combination of conditional statements inside the loop executing during the same iteration, where normally only one or two of them execute. It might be very difficult to reproduce the fault where the main loop slows down for too long and the FIFO overruns its buffer; this condition verges on being untestable. If the code is written such that the FIFO code can be tested in isolation, a test suite can be developed that tests it for expected behavior before it even gets used in the final program.
Even if you do not write a test for overrun initially, you may theorize that an overrun is occurring in your program and write the test to prove or disprove your theory. You'll get your answer pretty quickly and have a means of reproducing the fault easily, time and time again. Fix the bug and use the test again to confirm you have eradicated it. Keep the test in case you ever modify the FIFO code again and want to check for regressions.

Once all your units are thoroughly tested, you can start connecting them together with some confidence that everything will work together. No doubt some interactions may cause problems, but speaking from experience these are often 'static' bugs, easily reproducible and quite easy to track down. Testing the interaction of units is an important stage of testing. Effectively you should build up a hierarchy of tests, starting with individual units, then interdependent units working together, and finally the whole system.

Erik Malund said:
> they are "structured so you CAN debug", the so-called 'optimizer' is turned OFF.

It's not that simple and you know it. See above.

Matthew Bucknall said:
> I'm not really sure what your argument is, Erik. If you can't fit unoptimized code into the available space, or if it is not fast enough, then you have to optimize it.

Erik Malund said:
> True, but NOT BY USING A STUPID "OPTIMIZER". Just code it better, i.e. write optimized, do not rely on a 'mechanical' thingy.

Firstly, if you're writing in C, it doesn't matter how tightly you code; turning the optimizer on will likely still make the code more efficient (in size or performance, depending on what you're optimizing for). In C you cannot control how registers are used and reused, nor do you have enough access to the CPU to perform the kinds of tricks you might perform when writing optimized assembler. The optimizer does have control over register usage, and it can also recognize many patterns in the code and apply many of the same tricks you might implement by hand. Optimizers are not as stupid as you might think. There's plenty of literature on how they work; very interesting reading, in my opinion.

Granted, optimizers do not tend to see the grand plan that the programmer has in his/her mind, so there is no excuse for writing sloppy code and thinking the optimizer can take care of everything for you. I don't think it is particularly helpful, however, for a programmer to go optimizing code in the early stages of development. I would say there is more to gain from writing clear, easy-to-understand code than there is in hand-optimizing it from the word go. At least you can implement and prove algorithms quickly with this un-hand-optimized code and develop a test suite around it. Areas of code that do need optimizing can be refined later on and verified against the test suite for any regressions.

Nothing matches writing in assembly as far as memory and speed are concerned, but the advantages really end there. No point mentioning the disadvantages; you all know what they are.

Matthew Bucknall said:
> If you do optimize, then an ICE isn't likely to work well, so you have to look to other methods for debugging and testing. That's just a given.

Erik Malund said:
> give me JUST ONE that in no way whatsoever uses any extra resources and does not affect timing or anything else.

See above. Your code ought not to be so sensitive.

Matthew Bucknall said:
> If the kind of problems you're talking about cause so much hassle, then I'm sorry to say it sounds like you're not approaching testing properly.

Erik Malund said:
> They do not (see above) and I do not, in any way, "approach testing". I know that the only person you cannot test is yourself. The issue is NOT "hassle with testing"; the issue is "fixing what the testers report to you".

Again, I agree, but there is plenty you can do to make your own and their lives easier. You should always bear in mind how something might be tested, even if you do not do the testing, much the same way that when you design a device, you should always bear in mind how it's going to be used, even if you'll never use it.

The problems you have suggested are not uncommon; we've probably all faced similar issues at one point or another. For that reason, a lot of time and money has been spent on researching and developing methods for testing code and destroying bugs. It's not all new-age phooey; there are some useful techniques out there, equally applicable to embedded development as they are to desktop/server programming.

Embedded development isn't all about fitting an FFT into 1 byte of program space and taking 3 months to do so anymore. Products need to turn around faster and are expected to work reliably. That means modular, reusable, testable code, not interwoven 'works of genius' that no one will likely recognize you for.

Matt.



