In business computing we are often implementing algorithms to solve equations. Sometimes these become very complex.
Another approach, which we do not employ as often, is to derive an equation to replace a more complicated algorithm.
Mathematicians do this a lot, and physicists somewhat, but the actual computerization of their equations often turns into highly complex algorithms.
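A classic, if trivial, illustration of this kind of reduction (my own example, not anything to do with the game discussed below) is summing the first n integers: the obvious implementation is a loop, but Gauss's closed-form equation replaces the whole algorithm with one line.

```python
def sum_first_n_loop(n):
    """The algorithmic approach: O(n) additions in a loop."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_first_n_formula(n):
    """The equation approach: Gauss's n(n+1)/2, computed in O(1)."""
    return n * (n + 1) // 2

# Both agree, but the second has nothing left to iterate, and
# nothing left to get wrong.
assert sum_first_n_loop(100) == sum_first_n_formula(100) == 5050
```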
I had an interesting example of reductio ad equation the other day.
There is a game called Petals Around the Rose which is simple, fun to play, and secret. That is, the solution, once you've discovered it, should be kept secret and not revealed. In keeping with that, I will not give any spoilers here.
There is some commentary suggesting that the 'smarter' you are, the longer it will take you to solve the game. There were also some angry comments about this characterization from people who solved it very quickly and thought the comment meant they were stupid or some such.
Having taken a few minutes to solve it after my wife showed it to me, I would say rather that the more 'analytical' your approach is, the longer it may take you. People who are less analytical and more intuitive may leap to the correct solution almost immediately.
But the solution is an algorithm. It is a recipe for deriving the correct answer. After divining the solution and verifying I was correct, I got to thinking about what it would take to implement a program. The algorithm is not complicated, a few lines of code, but something nagged at me.
I felt there should be a way to arrive at the correct answer in one line of code, a single equation that encapsulated the two curious 'twists' in the algorithm.
A bit of analysis later and I was able to reduce the entire algorithm into a remarkably simple equation. And I must say, that was fun.
I am a big fan of refactoring code. Bang it out, get the algorithm working albeit in an ugly manner, then refactor it over and over. Ultimately, you should arrive at a much simplified program which can hardly fail. That is, no matter what you throw into it, a valid result will emerge, even if that valid result is the 'incorrect input' error message. No missed error conditions or aborts, no incorrect but reasonable-looking results.
I call this elegant programming. For too long we have approached development with a GIGO philosophy which inherently says that we are not going to handle all the conditions. There will be some garbage that can go in and you will get garbage out.
A different approach is to write this level of elegance into our code. You may get garbage in, but you will always get a valid, understandable, result out, even if it is just an explanation that the input was garbage.
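A minimal sketch of what I mean (my own illustration, not code from any particular system): a routine where every input path, however garbage-laden, yields a well-defined, understandable result rather than an exception or a plausible-looking wrong answer.

```python
def parse_age(raw):
    """Return ('ok', age) or ('error', explanation). Never raises,
    never aborts: garbage in, labeled explanation out."""
    if not isinstance(raw, str):
        return ("error", "input must be a string")
    text = raw.strip()
    if not text:
        return ("error", "input was empty")
    try:
        age = int(text)
    except ValueError:
        return ("error", f"'{text}' is not a whole number")
    if not 0 <= age <= 150:
        return ("error", f"{age} is outside the plausible range 0-150")
    return ("ok", age)
```

Every branch returns a tagged, self-explanatory value, so the caller cannot receive garbage out; at worst it receives a precise statement of what was wrong with the garbage in.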
This, by the way, flies in the face of a lot of AI programming. Learning machines in particular seem to be prone to GIGO. But people are not. We deal with garbage input all the time, particularly inconsistent or contradictory input. Better learning and reasoning architectures are needed to emulate our ability to process through the garbage.
There is a lot of good work being done on inconsistent reasoning and uncertainty in inference systems. I reviewed three papers this year on adding uncertainty factors to description logics for the semantic web. But a better way of representing such uncertainty and inconsistency is needed. This may reflect back to the comment made to me at IJCAI about looking for the 'ghostly signature' in the data that represents knowledge.
At some point those efforts will result in a computational intelligence being able to intuit the solution to a problem rather than deriving it through analysis. Then we will have made progress.