Sunday, October 25, 2009

The goals of J2EE

As the old saying goes, it helps to know where we've been (and why we got there) in order to know where next to go. I want to explain the "why" and "how" of J2EE, in order to make sure that certain concepts (like lookup, which is important in Item 16, for example) are clear.



Throughout the history of computer science, the overriding goal of any language, tool, or library has largely been to raise the level of abstraction away from details that distract us from the Real Work at hand. Consider the classic OSI seven-layer network stack, for example. It's not that when you "open a socket" you actually open something that directly pipes over to another machine; instead, that act serves as an abstraction over the four or five layers of software (and hardware, once you reach the physical layer) that each provide a certain amount of support to make all this stuff work.



In the early days of enterprise systems, layers were painfully absent: all data access was done directly against files of fixed-length records, and anything that happened to those records was your business and yours alone. No layering was present because the systems we ran on in those days didn't have many CPU cycles or much memory to spare. Everything had to be as tightly focused as it could be.



As hardware capacity grew and demand for more complex processing grew with it, we found it necessary and desirable to have certain behavior guaranteed. So a layer of software was put on top of the traditional flat-file collection, and we called it a transaction processing system; it managed concurrent access to the data, making sure that the data obeyed the logical constraints we put into it via the code we wrote. Over time, this was formalized even further to include a powerful query syntax, and thus was the modern relational database, and SQL, born.



Then we started wanting to let end users work with the data stored in the database, rather than feeding data processing clerks the stacks of paper containing the data to be entered. Not only did college students lose a viable form of summer employment, but a new form of programming, the client/server architecture, was born. A program executed on the client machine, responsible for presentation and data capture, and turned user input into statements of work to execute against the database system. Typically this program was of the graphical user interface variety, written in some higher-level language built specifically for this purpose, customized to the particular system being developed for the company.



As the number of clients against these client/server systems grew, however, we began to run into a limitation: thanks to the internal processing that accompanies each client action against a client/server database, the number of physical network connections (and the associated software costs) against the database has a hard upper limit, placing an arbitrary cap on the number of users who can use the system at the same time. We can say that n clients was the upper limit of users against the system, where n was this maximum number of connections; as soon as client n+1 wanted to log in, we needed a new database for him or her.



Even the largest Fortune 50 companies could accept this state of affairs for a short period of time because the largest number of users against an enterprise system usually didn't crest four digits; despite the costs involved in doing so, it's usually possible, though not desirable, to push a new installation out to a thousand internal clients. As soon as we started adopting the Web as a public interface to enterprise systems, however, the situation changed radically: the Web is all about extending the corporation's "reach," and that meant users could, virtually speaking, visit the corporation from all over the world, all at once. Where we used to have thousands of clients, the Web meant that now we had millions.



This enormous jump in the number of possible concurrent clients meant that the old "one client, one connection" architecture was no longer credible. A new breed of software architecture was necessary if this "bring the system to the end user via the Web" ideal was to have any chance of working.



An old maxim in computer science states, "There is no problem that cannot be solved by adding an additional layer of indirection." In this case, because client programs don't typically make use of the connection they hold to the server 100% of the time, the layer of indirection introduced was a layer of software in between the clients and the server. (Note the deliberate terminology; see Item 3 for details.) This layer of resource-managing software, after a few years of wrestling for a good name, came to be known as middleware.
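The essence of that resource-managing layer can be sketched in a few lines of Java. The following is a toy illustration of my own (the class names `ConnectionPool` and `Connection` here are hypothetical stand-ins, not any real driver or J2EE API): a small, fixed set of "physical connections" is multiplexed among a much larger number of clients, each of whom borrows a connection only for the moment it is actually needed.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {
    // Stand-in for an expensive physical database connection.
    static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    // The middleware trick: n real connections shared among many clients.
    static class ConnectionPool {
        private final BlockingQueue<Connection> idle;

        ConnectionPool(int size) {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) idle.add(new Connection(i));
        }

        // Client n+1 simply waits for a free connection,
        // instead of requiring a whole new database installation.
        Connection acquire() throws InterruptedException { return idle.take(); }

        void release(Connection c) { idle.add(c); }
    }

    public static void main(String[] args) throws Exception {
        ConnectionPool pool = new ConnectionPool(2);   // only 2 "physical" connections...
        AtomicInteger served = new AtomicInteger();
        Thread[] clients = new Thread[10];             // ...shared by 10 clients
        for (int i = 0; i < clients.length; i++) {
            clients[i] = new Thread(() -> {
                try {
                    Connection c = pool.acquire();
                    try {
                        Thread.sleep(10);              // pretend to do real work
                        served.incrementAndGet();
                    } finally {
                        pool.release(c);               // give the connection back promptly
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            clients[i].start();
        }
        for (Thread t : clients) t.join();
        System.out.println("served " + served.get() + " clients over 2 connections");
    }
}
```

This works precisely because, as noted above, clients rarely use their connection 100% of the time; the pool exploits that idle time to serve far more users than there are connections.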












