Solving the Software Paradox
Software is an integral part of our everyday lives. We wear it in our pagers, cell phones and personal CD players, transport ourselves under its control in cars, mass transit and airplanes, and spend more hours each day interacting with it playing video games, surfing or working. Yet even as today’s software is becoming more complex, and its reliable operation more critical to all facets of our lives, competitive pressures—such as doing everything in “Internet time”—result in less time to debug and test that software. That’s the paradox: How do we build the most complex systems ever conceived by man in less and less time, and be certain they are robust?

Of course, there is plenty of evidence that much of the software around us is not robust. As applications move from the back office to the Web, and 24x7 uptime becomes the norm for every application, failures become public. Whether they affect online auctions or air traffic control, our industry’s glitches are increasingly reported by the media. Bad publicity isn’t the only consequence: The Standish Group concluded that failures resulting from software defects cost U.S. companies $85 billion a year in lost productivity and damages.

Better design methods, new programming languages and improved tools have all helped to make today’s complex systems possible. But I have never yet seen a design tool that prevents a bad design, nor any language in which it is hard to write a buggy, hard-to-understand program.

Today, our primary defense against buggy software is testing. And therein lies the fundamental problem: Testing takes too long, is too labor-intensive, doesn’t find all the bugs, and is typically performed only at the end of the development cycle. A study by Capers Jones reports that even the best software development organizations are only 85 percent effective at removing bugs. A report by The Standish Group is even more dismal: The “typical” testing effort identifies only 30 to 40 percent of the defects present.

As systems get more complex, black-box testing exercises less and less of the application, particularly code dealing with errors and other “exceptional” situations. The problem here is that QA professionals often have no visibility into the internal structure of the code and don’t even know that such error-handling code exists. Then there’s the problem of creating the tests in the first place, which can be a significant development effort. And with the way Web-based applications seem to undergo redesigns every six months, it’s not clear how much value there is in creating automated tests that may soon be obsolete.
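To make that concrete, consider this minimal sketch (in Python, with hypothetical names) of the kind of hidden error-handling branch described above. A tester who has never seen the source has no reason to feed this parser a malformed line, so the branch goes unexercised:

    # Hypothetical config parser. The "except ValueError" branch handles a
    # malformed line, but nothing visible from the outside hints that the
    # branch exists, so black-box tests are unlikely ever to reach it.
    def parse_config(text):
        settings = {}
        for line in text.splitlines():
            if not line or line.startswith("#"):
                continue  # blank lines and comments are silently skipped
            try:
                key, value = line.split("=", 1)
            except ValueError:
                # Hidden error path: a line with no "=" is ignored rather
                # than reported. Only a test built with knowledge of the
                # source would think to exercise this case.
                continue
            settings[key.strip()] = value.strip()
        return settings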

So, what’s the solution? First, look for alternative methods that complement conventional testing. A key observation is that there are two types of defects: “intent” errors and “effect” errors. Intent errors occur when the program doesn’t implement the specification: for example, a search function that should ignore the case of letters but doesn’t. An effect error is a logic error whereby the intended function was not implemented correctly at the code level: for example, the search function going off the end of the list.
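As a sketch only (Python, hypothetical names), here is that search example with both defect types present:

    def find_name(names, target):
        """Spec: return the index of target in names, ignoring case."""
        i = 0
        # Effect error: "<=" runs one index past the end of the list, so a
        # search for an absent name crashes with an IndexError instead of
        # returning -1.
        while i <= len(names):
            # Intent error: the comparison is case-sensitive, so "Alice"
            # never matches "alice", violating the specification.
            if names[i] == target:
                return i
            i += 1
        return -1

A functional test with mismatched case would expose the intent error quickly; the effect error surfaces only when the target is absent, exactly the kind of edge case discussed next.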

Conventional testing is often the only way to discover intent errors. But it is very poor at finding effect errors, because these tend to be “edge” or “corner” cases, at least from the code’s point of view, and the existence of those cases is completely hidden from the test designer. According to Capers Jones, nearly two-thirds of all defects fall into this category.

What’s needed are ways to find these code-level errors early in the development cycle. One of the most effective ways to do this is code inspection, which is a form of static testing that analyzes source code to find effect (and some intent) errors. Studies show that inspection is more efficient and cost-effective than conventional testing.
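As a toy illustration of the idea, and not a sketch of any particular product, an automated check needs only to read the source. The snippet below, assuming Python’s standard ast module, flags one classic defect pattern without ever running the code:

    import ast

    def find_bare_excepts(source, filename="<module>"):
        """Flag bare 'except:' clauses, which swallow every exception."""
        findings = []
        for node in ast.walk(ast.parse(source, filename)):
            # A bare "except:" has no exception type attached to it.
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append("%s:%d: bare 'except:' hides errors"
                                % (filename, node.lineno))
        return findings

    # Example: inspect a snippet statically, without executing it.
    print(find_bare_excepts("try:\n    risky()\nexcept:\n    pass\n"))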

Inspection has several other benefits: It is usable from the start of the development cycle, as soon as a module is coded; it does not require a complete application, since modules can be inspected in isolation; there are no test cases to write, debug or maintain; and no test environment or target hardware is required (important for embedded applications).

Finally, the location of the problem is apparent: you are looking directly at the source code, so no time is spent tracing a failure back to its cause.

So if inspection is effective, why don’t we do it more? The root cause is cultural: Programmers view writing code as intellectually challenging, while reading (inspecting) code is a waste of their skills. Similarly, project leaders and managers, challenged by ambitious product development schedules, see writing code as productive and reading code as a time waster.

One solution is to use Automated Software Inspection. It has all the benefits of manual inspection, plus it does not require dedicated (human) resources to read code; it gives rapid feedback; it is highly repeatable, since an automated tool doesn’t get bored reading the same code over and over; and it avoids finger-pointing among developers.

There are two ways to accomplish Automated Software Inspection: via tools or via services. Tools take time to evaluate, buy and install, and to learn how to use, configure and maintain properly. And because tools tend to have a high “false-positive” rate (i.e., they flag many defects that are not “real” defects), a tools-based approach requires allocating development resources to run the tool and filter the results.

An alternative approach is the use of Automated Software Inspection services that find defects without taxing in-house resources and return results within days. Because the service provider performs the inspection and eliminates false positives, only critical defects are in the final report. There are no tools to install, configure and maintain.

The quality of software affects the bottom line of every company. Fortunately, new development methodologies, such as Extreme Programming, put peer review squarely in the middle of the development process—every line of code is reviewed as it is written. Institutionalizing such processes, and using new technologies such as Automated Software Inspection, can take us one step closer to resolving the software paradox.

Scott Trappe is senior vice president and general manager of InstantQA Services for Reasoning Inc. He can be reached at [email protected].

 
The original article can be found in SD Times: Software Development.

