Business logic is an interesting beast. Despite the fact that business logic has its dedicated layer within modern application architecture (the aptly named "business tier", a.k.a. "middle tier" or "logic layer"), developers still insist on putting business logic in all kinds of places. You can sometimes find business logic in user interfaces because people argue that in some cases it is too slow to go to the logic layer to perform validation. Other developers stuff business logic into their application's database because they can and it saves round trips to the server, which they claim is good for performance. So what is the deal? Where should logic go, and why do we have a logic layer, yet put the logic in other places?

My personal opinion is that business logic belongs in one place and one place only: The middle tier!

Logic in the User Interface

What about the argument that some situations require validation in the UI because communication with the server is too slow? Well, I do not have a problem with developers who want to add UI validation. It usually doesn't hurt to validate twice. So if you use UI validation, that doesn't mean you shouldn't have validation in the middle tier. In many programming projects, developers start out putting validation in the UI and then add it to the middle tier later. Or at least that's what they plan to do, yet all too often the programming team never has time to go back and add validation to the middle tier. Yikes! I think you should start your project with your logic in the middle tier and accept those slow server round trips initially. Especially in ASP.NET applications, you have to consider scenarios where browsers do not support client-side validation anyway. So start out with server-side validation using the middle tier, and add UI validation later. Not the other way around!
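
To make that concrete, here is a minimal sketch of what a middle-tier validation routine might look like. The Customer class and its rules are hypothetical, invented purely for illustration:

using System.Collections.Generic;

// Hypothetical business object; the class and its rules are illustrative only.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }

    // The middle tier owns the rules. The UI may duplicate some of them to
    // save a round trip, but this method is the final word on validity.
    public IList<string> Validate()
    {
        var brokenRules = new List<string>();

        if (string.IsNullOrEmpty(Name))
            brokenRules.Add("Name is required.");

        if (string.IsNullOrEmpty(Email) || Email.IndexOf('@') < 0)
            brokenRules.Add("Email must be a valid address.");

        return brokenRules;
    }
}

Whatever the UI checks or doesn't check, the object reports the same broken rules, so nothing depends on the capabilities of the browser.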

With Service Oriented Architecture (SOA), where to validate becomes a bit clearer, since you cannot drag part of the service into the UI, or at least not quite as easily. The main idea in SOA is that a service receives a well-defined message, processes it ("applies logic"), and then returns another well-defined message. In that case, you would want to add a little bit of logic to the user interface that ends up using the service, since you would not want the service to raise logic exceptions all the time for reasons you could have easily handled ahead of time. However, the service itself certainly would not assume that the message sent to it is automatically valid. Quite the contrary! The service sets the rules for what is valid, and it validates those rules using a number of techniques. Among those techniques is the service contract, but also internal business rule validation. An example that comes to mind is a service my company recently worked on that can analyze medical conditions and return certain predictions based on that information. The service defines the messages it expects, which lays the ground rules for the data it receives. However, things are not that simple! The service also has to analyze the data as a whole, figure out if the entire data series makes sense, and decide whether the history is such that a prediction can be made. Certainly, we would not want to add this logic to all the user interfaces that use the service. Of course we can duplicate simple validation in the interface. Certain lab tests have to be provided for the prediction to work, and there is no need to bother the service if that field is blank in the UI. But then again, if the service were in fact called with an empty value, it would be smart enough to check for that problem and react accordingly.
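
As a rough sketch of that division of labor (the message shape, the service, and all field names here are assumptions made up for illustration, not the actual contract):

using System;

// Hypothetical request message; every field name is invented for illustration.
public class PredictionRequest
{
    public string PatientId { get; set; }
    public decimal? LabResult { get; set; }
    public DateTime[] SampleDates { get; set; }
}

public class PredictionService
{
    // The service never assumes the incoming message is valid. It re-checks
    // everything, even rules a well-behaved UI would have caught beforehand.
    public string Predict(PredictionRequest request)
    {
        if (request == null)
            throw new ArgumentNullException("request");
        if (!request.LabResult.HasValue)
            throw new ArgumentException("A lab result is required for a prediction.");
        if (request.SampleDates == null || request.SampleDates.Length < 2)
            throw new ArgumentException("The data series is too short to analyze.");

        // ...analyze the series as a whole, decide whether the history
        // supports a prediction, and build the response message...
        return "prediction";
    }
}

The UI is welcome to check for a missing lab result before sending the message; the service enforces the rule either way.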

Logic in the Database

Another scenario I encounter all the time is business logic in the database. SQL Server enthusiasts in particular like to use T-SQL for almost everything. I recently helped one of our developers at EPS fix a stored procedure he had inherited from a client. We thought the problem was simply fixing a moderately complex stored procedure, until I realized that the procedure really was a complete piece of business logic. At first, the procedure looked like the data-update part of simple CRUD functionality. But after some closer inspection, it turned out that in addition to saving data, the procedure also ran a number of additional tests.

Initially, these "tests" seem to have been relatively simple. After saving data, the stored procedure simply looked at the data to see whether, within a series of related values, the highest value rose 25% above the lowest. If so, the procedure added an entry to an outgoing e-mail queue to alert interested parties. Of course, the scenario evolved over time. The first modification was to change the hard-coded value of 25% to a data-driven value. Then, the pattern grew more complicated with the addition of date constraints, which were also data-driven. Then, it turned out that the series of data could contain the same pattern multiple times, so the logic in the procedure had to first identify an overall pattern, split the data series into multiple smaller series, and then perform the search on each. And finally, the overall concept evolved to compare the identified series to a completely different set of data, which required multiple queries to run until the first match was found.
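
To give you an idea what the first, simple incarnation of that check looks like once it is expressed in middle-tier code, here is a sketch in C#. The names are invented, and the real logic, as described, grew far beyond this:

using System.Collections.Generic;

public static class SeriesAnalyzer
{
    // First-generation rule: flag the series when its highest value exceeds
    // its lowest by the given threshold (originally hard-coded as 25%,
    // later made data-driven).
    public static bool RiseExceedsThreshold(IList<decimal> series, decimal threshold)
    {
        if (series == null || series.Count < 2)
            return false;

        decimal low = series[0], high = series[0];
        foreach (decimal value in series)
        {
            if (value < low) low = value;
            if (value > high) high = value;
        }

        if (low <= 0)
            return false; // guard against division by zero (or a negative baseline)

        return (high - low) / low >= threshold;
    }
}

In the real system, a true result is what queued the e-mail notification, and the threshold parameter is where the data-driven value replaced the hard-coded 25%.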

All of this was in a stored procedure, and the problem was that it didn't work reliably. The original architect of this system was a DBA at heart and had his mind set on keeping all of the logic in the stored procedure, since quite a bit of data crunching was involved. As he put it, "We really could not afford the performance penalty of bringing the data to the middle tier."

To make a long story short: We moved all this logic from the database into the middle tier, where we had the full power of a .NET language. When we did that, we immediately discovered not just what caused the bug we originally needed to solve, but also a number of other problems! For instance, if the 25% rise wasn't caused by the last entered value being the new high, but by modifying an existing value so it became the new low point in the series, the stored procedure completely ignored it, thus failing to create the desired e-mail notification. Similar problems occurred in delete scenarios or in scenarios where dates were changed. In fact, it turned out that the majority of possible events were not covered at all! We fixed this and immediately added unit tests for all possible scenarios to our new business object.
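
A couple of those tests might look like the following sketch (NUnit-style, building on the hypothetical SeriesAnalyzer shown earlier; the test names and data are illustrative):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class SeriesAnalyzerTests
{
    [Test]
    public void EditedValueBecomingTheNewLowStillTriggersTheRule()
    {
        // The pattern is present even though the last entered value (80)
        // is a new low rather than a new high: (110 - 80) / 80 = 37.5%.
        // This is exactly the case the stored procedure missed.
        var series = new List<decimal> { 100m, 110m, 80m };
        Assert.IsTrue(SeriesAnalyzer.RiseExceedsThreshold(series, 0.25m));
    }

    [Test]
    public void SeriesTooShortAfterDeleteDoesNotTrigger()
    {
        // After a delete, the remaining series may be too short to form a pattern.
        var series = new List<decimal> { 100m };
        Assert.IsFalse(SeriesAnalyzer.RiseExceedsThreshold(series, 0.25m));
    }
}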

But what about performance? After all, the process was very data intensive. Well, as it turns out, since the server that hosted the middle tier was on the same (fast) network as the database server, performance (even initially) was not bad at all. Not quite as fast as the stored procedure solution, but still very good. We then proceeded to optimize the new object, and as it turned out, we were able to cache quite a bit of data in the middle tier, such as the data-driven configuration options, as well as the data series we had to use for comparison. We were also able to use the more powerful .NET language (C# in our case) to craft the code much more cleverly, avoiding many trips to the database altogether. In the end, the solution we created was not just working and maintainable, but also significantly faster than the original version!
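
The caching itself does not have to be fancy. Here is a minimal sketch of the idea, assuming a simple keyed lookup for the data-driven configuration values (the class and method names are mine, not from the actual project):

using System;
using System.Collections.Generic;

// Illustrative only: keeps data-driven configuration (such as the threshold)
// in middle-tier memory so repeated runs skip the database round trip.
public class ConfigurationCache
{
    private readonly Dictionary<string, decimal> _values = new Dictionary<string, decimal>();
    private readonly object _lock = new object();

    public decimal GetValue(string key, Func<string, decimal> loadFromDatabase)
    {
        lock (_lock)
        {
            decimal value;
            if (!_values.TryGetValue(key, out value))
            {
                value = loadFromDatabase(key); // hits the database only once per key
                _values[key] = value;
            }
            return value;
        }
    }
}

Every call after the first stays in memory, and the comparison data series can be cached the same way.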

The "Scalability-Thing"

Of course, in most applications today, performance is not the main goal, simply because unless one faces an extraordinarily slow task, modern computers are fast enough. So the main issue is not how quickly your system can execute a single task, but how many of those tasks the system can perform concurrently. So far, our best answer to scalability has been multi-tier applications. The main idea is to have a large number of clients ("front end" or "UI tier") connecting to a smaller number of middle-tier components (the "business logic"), which in turn create a low number of connections to the database(s). So there is a distinct funnel pattern here: Many clients are served by a limited number of processing objects, which place as little burden on the database as possible.

What's great about this architecture is that it can be scaled by simply adding more hardware. Is your middle tier too slow? No problem: You can add more servers and load balance them. But what can you do if the logic processing is handled by the database? For one, that will tax the database servers more severely, causing the system to "hit the wall" much earlier. And then what do you do? How do you scale that mess? Additional hardware for the database servers can only take you so far.

The important thing to note here is that scalability is often diametrically opposed to performance. If you want to build a quick system, you could consider putting the Web server, the middle tier, and the database all on one computer (server) and then trying to cache as much in memory as you possibly can. This system would consume all the available processing power on every single hit, possibly even spreading the load over multiple processors, to process each request as quickly as possible. But a system like this could not scale. It would work well for a single user, but if you wanted to serve a million users, you'd have to put up a million servers, and you could not allow any of the data to be shared among those users. That's probably not the result you are aiming for, nor would you be likely to find anyone willing to pay for this monstrosity.

I say again: Put your business logic in the logic layer. It works out much better all around!