Eric Zenk is a Senior Software Engineer in the CGM group at Spatial and will be authoring occasional guest posts.
"Brian's Law: The code is always more broken then it seems to be"
Making C++ code solve math problems is one of my favorite things to do. The value of a piece of code is determined by how certain you can be that it works correctly and by how useful the problem it solves is. Working correctly includes both giving the right answer and doing so efficiently. The thrill in coding comes from knowing that what you are writing or fixing works, not just in the cases you have tested but in every other case as well. Like many worthy goals, complete certainty that a computer program is correct is unattainable.
A lot of ink (and bits) has been spilled on how to verify that software is correct. There is no universal answer. Programming is hard. The remainder of this blog entry will discuss some things that I find helpful in the struggle. Since I work on ACIS and CGM, many of my comments are aimed specifically at the mathematical aspects of understanding C++ code.
Here are some tactics:
1. Acceptance Tests: Take advantage of any data you have lying around and any models you get in bug reports. The same 3D models can be used to test many operations. Another measure of a test suite's scope is how many different workflows exercise the functionality; it is best to get as many people as possible writing tests for any workflow a customer might plausibly use. This is hard to do well enough: I often find customers using ACIS in ways I wouldn't expect.
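As a sketch of what that model reuse can look like, here is a minimal harness. Everything in it (TestOp, run_acceptance_suite, the operation list) is a hypothetical illustration, not an ACIS API: each archived model is pushed through every operation, so one model from a bug report keeps paying off across many features.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical harness, not an ACIS API: each archived model (from old
// data or bug reports) is run through every registered operation.
struct TestOp {
    std::string name;
    std::function<bool(const std::string& model_path)> run;  // true = pass
};

int run_acceptance_suite(const std::vector<std::string>& models,
                         const std::vector<TestOp>& ops) {
    int failures = 0;
    for (const auto& model : models)
        for (const auto& op : ops)
            if (!op.run(model)) {
                std::cerr << "FAIL: " << op.name << " on " << model << "\n";
                ++failures;
            }
    return failures;
}
```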
2. Automatic Inline Testing: John Sloan has written about using contract programming and assertions to verify correctness. We have internal builds of ACIS that contain extra checking code and assertions. This gives us better verification that what we are working on is correct, while costing no extra runtime in customer builds.
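One common way to wire up that kind of zero-cost checking is a compile-time-gated macro. This is only a sketch: INTERNAL_BUILD is an assumed flag for illustration, not the actual ACIS mechanism.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical macro: checks compile into internal builds only, so
// customer builds pay no runtime cost at all.
#ifdef INTERNAL_BUILD
#define INTERNAL_CHECK(cond, msg)                               \
    do {                                                        \
        if (!(cond)) {                                          \
            std::fprintf(stderr, "check failed: %s (%s:%d)\n",  \
                         (msg), __FILE__, __LINE__);            \
            std::abort();                                       \
        }                                                       \
    } while (0)
#else
#define INTERNAL_CHECK(cond, msg) ((void)0)
#endif

// e.g. INTERNAL_CHECK(edge_count >= 0, "edge count went negative");
```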
I end up doing this in two ways. When initially writing new code, I put in asserts to document and test my expectations: e.g., array indices are not out of bounds, factory methods create non-null objects, a list-sorting function returns a list that is actually sorted, and so on. These assertions seem a bit silly, but they catch many obvious problems without much time spent debugging. Asserting that a list came back in sorted order helped me catch the fact that a comparator I wrote was returning 0 or 1 while the sorting algorithm expected +1, 0, or -1. Basic contract asserts help give confidence that you have not plugged the pieces together incorrectly.
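Here is a small illustration of that kind of contract assert, with hypothetical names. The postcondition check at the end is exactly the sort of assert that caught the comparator's wrong sign convention.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical three-way comparator. The bug described above: an earlier
// version returned only 0 or 1, while the caller expected -1, 0, or +1.
int compare_keys(double a, double b) {
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}

std::vector<double> sorted_copy(const std::vector<double>& in) {
    std::vector<double> out = in;
    std::sort(out.begin(), out.end(),
              [](double a, double b) { return compare_keys(a, b) < 0; });
    // Postcondition: the result really is sorted. An assert like this is
    // what exposed the comparator's wrong sign convention.
    assert(std::is_sorted(out.begin(), out.end()));
    return out;
}
```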
When debugging, I use visual breakpoints (see item 3) to form a guess about what the problem is. Once I have refined the guess into something mathematically testable, I put an assert in the code or use the debugger to test whether my guess is correct. If the guess was incorrect, I try other possibilities. Once I have found the "first noticeable symptom" of the problem, I analyze what caused it and work backwards. It takes a few iterations of modifying the code and the debug assertions to get the assertions right. I usually keep whatever assertions I wrote while debugging in the code, provided they don't fail on any regression tests. As a result of using assertions for several years in the ACIS faceter, I have been able to fix a few bugs in a matter of minutes: I load the test, it throws an assertion, and fixing the assertion fixes the bug.
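As an illustration (with hypothetical names and tolerances), a refined guess such as "a degenerate normal sneaks in here" becomes an assertion that can then stay in the code:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical example: a debugging guess refined into mathematically
// testable assertions that remain in the code afterwards.
struct Vec3 { double x, y, z; };

double length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

Vec3 unit_normal(const Vec3& v) {
    double len = length(v);
    assert(len > 1e-12);  // the guess: a zero-length normal reaches this point
    Vec3 n{v.x / len, v.y / len, v.z / len};
    assert(std::fabs(length(n) - 1.0) < 1e-9);  // the invariant a fix must restore
    return n;
}
```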
3. Visual Breakpoints: The idea of a visual breakpoint is to display the model being worked on after each change, so you can determine where things first went wrong. To speed the process up, we sort breakpoints into different levels of detail. For example, the faceter has roughly a dozen coarse visual breakpoints corresponding to major algorithm steps, and fifty or more visual breakpoints for smaller steps. To diagnose a problem with the faceter, I look at the coarse breakpoints to get a rough idea of where the problem is. Most of the time, the coarse breakpoints suggest what assertions to test and where. If not, I turn on the more detailed breakpoints.
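A minimal sketch of how level-sorted visual breakpoints might be wired up; Model, g_visual_level, and display_model are hypothetical stand-ins for whatever viewer a debugging build actually talks to.

```cpp
#include <iostream>
#include <string>

struct Model { std::string name; };  // placeholder for the real model type

// Hypothetical hook: 0 = off, 1 = coarse breakpoints (major algorithm
// steps), 2 = fine-grained breakpoints (every small step).
int g_visual_level = 1;

// In a real debugging build this would push the model to a viewer;
// here it just logs, to keep the sketch self-contained.
void display_model(const Model& m, const std::string& label) {
    std::cout << "[visual break] " << label << ": " << m.name << "\n";
}

void visual_break(int level, const Model& m, const std::string& label) {
    if (g_visual_level >= level)
        display_model(m, label);
}

// Usage inside an algorithm:
//   visual_break(1, mesh, "after seeding the quad tree");  // coarse step
//   visual_break(2, mesh, "after splitting one cell");     // fine step
```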
The following two videos were made by capturing the visual breakpoints from a test. First are the coarse breakpoints for api_align:
[swf file="http://sandbox.spatial.com/downloads/alignbreaks.flv" flashvars="autostart=false"]
Next are the coarse breakpoints plus some finer-grained breakpoints (which show how the mesh is built from the quad tree) of a faceter test. The code shown in these breakpoints is the in-work version of ACIS.
[swf file="http://sandbox.spatial.com/downloads/facetbreaks.flv" flashvars="autostart=false"]
4. No-op Assertions: Refactoring old code is notoriously difficult. It often happens that you need to change part of an algorithm to a better implementation, and then it is important to understand when there is a behavior change and whether it is an improvement or a regression. In these cases, I usually make an intermediate, temporary code change in which both the new and the old low-level functions are used and their results are asserted equal. Once the methods always get the same result, or the new method always gets a better one, I can version the change and start using the new method. When assessing whether the new or old result is better, hard theory is your best friend: understanding what properties the desired answer has lets you judge which algorithm is better. When the older code gets a better answer, I study it further to understand how to improve the new version. In any case, having verified that the low-level pieces give the same answers over a significant test suite goes a long way toward confirming that the refactored code works.
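A sketch of that intermediate change, with a deliberately trivial hypothetical routine standing in for the real low-level function; INTERNAL_BUILD is the same assumed internal-checking flag as in the earlier sketch.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical old and new low-level implementations being swapped
// during a refactor (trivially identical here, purely for illustration).
double old_arc_length(double radius, double angle) { return radius * angle; }
double new_arc_length(double radius, double angle) { return radius * angle; }

double arc_length(double radius, double angle) {
    double result = new_arc_length(radius, angle);
#ifdef INTERNAL_BUILD
    // Temporary no-op assertion: run the old path too and insist the
    // answers agree (to a relative tolerance, since these are doubles).
    double reference = old_arc_length(radius, angle);
    assert(std::fabs(result - reference) <=
           1e-12 * std::fmax(1.0, std::fabs(reference)));
#endif
    return result;  // behavior is unchanged; the check is pure verification
}
```

Once the assert survives the whole regression suite, the old path can be versioned out and retired.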
Wrap up: Two strategies emerge from all of this: use the scientific method in debugging (guess what's wrong, try to falsify or confirm your guess, then iterate until you think of a good fix), and instrument internal builds so that they automatically check for low-level problems you have encountered before. Do you all have any strategies you want to share?