It's said that when a software project is 95% done, half the work still remains. The Uncertain Constants Drive Cost principle is one factor behind this observation.
Imagine you are given the task of designing a PID regulator for the speed control of an elevator. A very clever engineer at your company has written a specification, and all you have to do is implement the code in soft real time. Seems easy.
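To make the scenario concrete, here is a minimal sketch of such a PID regulator in C. The gain values and the toy plant model are illustrative stand-ins for what the specification would provide, not the specification itself:

```c
#include <stdio.h>

/* Gains from the (hypothetical) specification. */
#define KP 2.0   /* proportional gain */
#define KI 0.5   /* integral gain */
#define KD 0.1   /* derivative gain */

typedef struct {
    double integral;    /* accumulated error */
    double prev_error;  /* error from the previous step */
} pid_state;

/* One control step: compute the actuator output for the current
 * error, given the time step dt in seconds. */
static double pid_step(pid_state *s, double error, double dt)
{
    s->integral += error * dt;
    double derivative = (error - s->prev_error) / dt;
    s->prev_error = error;
    return KP * error + KI * s->integral + KD * derivative;
}

int main(void)
{
    pid_state s = { 0.0, 0.0 };
    double target = 1.0, speed = 0.0;  /* toy numbers */
    for (int i = 0; i < 10; i++) {
        double out = pid_step(&s, target - speed, 0.01);
        speed += out * 0.01;  /* trivial stand-in for the elevator dynamics */
        printf("step %2d: speed = %.4f\n", i, speed);
    }
    return 0;
}
```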
Same scenario again, but with a little twist: the PID regulator has to work for different kinds of elevators. Same PID algorithm, but now the P, I and D constants (parameters) have to be configurable at installation for each elevator type. Still pretty easy. One additional piece of cost is that the software has to be tested on several different elevators (and testing can account for about 50% of a project's cost).
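One plausible shape for that configurability, assuming nothing beyond the scenario above: a table of gain sets, one per elevator type, selected at installation time. The type names and values here are invented:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *type;   /* elevator type identifier */
    double kp, ki, kd;  /* gains tuned for that type */
} pid_config;

/* One entry per supported elevator type (values are invented). */
static const pid_config configs[] = {
    { "freight",   3.0, 0.8, 0.2 },
    { "passenger", 2.0, 0.5, 0.1 },
    { "panoramic", 1.5, 0.4, 0.1 },
};

/* Look up the gains for the installed elevator type. */
static const pid_config *find_config(const char *type)
{
    for (size_t i = 0; i < sizeof configs / sizeof configs[0]; i++)
        if (strcmp(configs[i].type, type) == 0)
            return &configs[i];
    return NULL;  /* unknown type: fail at installation, not mid-ride */
}
```

Notice that every row in that table still has to be validated on a real elevator, which is where the testing cost bites.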
Let's make it even more real-world complicated: suppose the P, I and D constants cannot be known before installation (perhaps the elevators are custom designed). Coding the solution will take about the same time, but now you not only have to test your solution on many elevators; for each elevator you also have to find the best values experimentally. This takes significantly more time, and in fact, running elevators up and down trying to find the best P, I and D values can take far longer than developing the PID regulator in software.
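The experimentation can at least be structured. One classic recipe (not mentioned above, but standard control-engineering practice) is Ziegler-Nichols closed-loop tuning: raise the proportional gain until the speed oscillates steadily, note that ultimate gain Ku and the oscillation period Tu, and derive all three constants from them. A sketch:

```c
typedef struct { double kp, ki, kd; } pid_gains;

/* Classic Ziegler-Nichols closed-loop rules: given the ultimate
 * gain ku (where the loop oscillates steadily) and the oscillation
 * period tu in seconds, derive the PID gains. */
static pid_gains ziegler_nichols(double ku, double tu)
{
    pid_gains g;
    g.kp = 0.6 * ku;
    g.ki = 1.2 * ku / tu;    /* kp / (tu / 2) */
    g.kd = 0.075 * ku * tu;  /* kp * (tu / 8) */
    return g;
}
```

The formula does not eliminate the field work, though: Ku and Tu are themselves uncertain constants that have to be measured on every single elevator.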
Unfortunately, things can get even worse: suppose your specification says that the same set of uncertain P, I and D constants has to work on all types of elevators. Now you find yourself experimenting with one elevator's constants, only to have to verify and/or experiment on every other elevator too. And then some really clever person suggests an automatic learning algorithm. That's great, except that now you have to experiment with the learning algorithm (often a simple one, since everybody knows complexity drives cost), making sure it finds reasonable constants for all elevators in all scenarios!
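Nothing above says what that learning algorithm looks like, but in the simple spirit it describes, it could be something like coordinate descent over the three gains, where every cost evaluation is a physical test run. A hypothetical sketch (run_test_cycle is a stand-in; a real one would run the elevator and score overshoot and settling time, which is exactly the expensive part):

```c
/* Stand-in cost function with a minimum at kp=2.0, ki=0.5, kd=0.1.
 * In reality this would run the elevator and score the ride, so
 * every call below is an expensive physical experiment. */
static double run_test_cycle(double kp, double ki, double kd)
{
    double a = kp - 2.0, b = ki - 0.5, c = kd - 0.1;
    return a * a + b * b + c * c;
}

/* Simple coordinate descent: nudge one gain at a time, keep any
 * change that lowers the measured cost, shrink the step size. */
static void autotune(double g[3], int iterations)
{
    double step = 0.5;
    double best = run_test_cycle(g[0], g[1], g[2]);

    for (int it = 0; it < iterations; it++) {
        for (int i = 0; i < 3; i++) {
            for (int dir = -1; dir <= 1; dir += 2) {
                g[i] += dir * step;
                double cost = run_test_cycle(g[0], g[1], g[2]);
                if (cost < best)
                    best = cost;        /* keep the improvement */
                else
                    g[i] -= dir * step; /* undo the change */
            }
        }
        step *= 0.7;  /* narrow the search as it converges */
    }
}
```

Even this toy version needs six test runs per iteration per elevator. Counting runs, not lines of code, is what the principle is about.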
This kind of experimenting is more common than you might think, but we tend to ignore it because it is not intellectually challenging; it's often more a matter of judgement than science. Other examples include automatic image processing for some website, or curve fitting with dynamic threshold algorithms.
Any time your code contains uncertain constants, think of the Uncertain Constants Drive Cost principle, and think hard about how math and science can reduce or eliminate the time you spend experimenting. Even if complexity drives cost, the uncertain constants you trade it for may cost you even more.
In my opinion, yes, code should be easily configurable, but if your configuration just hides uncertain constants (perhaps the P-value is specified to be somewhere between 1 and 10), then don't fool yourself into thinking that just because the coding is going to be easy, the final part of configuration and testing will be too.
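To illustrate, this is roughly what such a configuration entry amounts to (a hypothetical sketch):

```c
/* A "configurable" constant whose range is known but whose value
 * is not. The spec can promise 1.0..10.0; somebody still has to
 * find the actual value in the field. */
typedef struct {
    double min, max;  /* what the specification can promise */
    double value;     /* what installation still has to determine */
} uncertain_constant;

static uncertain_constant p_gain = { 1.0, 10.0, /* value: TBD */ 0.0 };
```

Declaring it takes a minute of coding; determining value is where the cost lives.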