
by Peter Wone - CorpTEL

Sun Makes Hardware

Client-server one year, middleware the next; this year it's three-tier. All this raises the perennial questions: where will it all end? And is it important, or is it just another passing fad?

As time goes by, things become clearer even as new developments muddy the water. One thing that has come to me is that there are consumers of resources and services, and there are producers of resources and services, and that almost any software process is both, several times over. This is why nothing ever seems to fit the mould properly: it's a matter of normalising the process model.

Consider the ISO seven-layer network model. If you treat each layer as both a client of and a server to the layers on either side of it, it's a lot easier to see how to implement it. Even better, if you treat each layer as an object class, you have a very tidy object model of the net (the whole Internet, for that matter) as a single distributed computer.
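
To make that concrete, here is a minimal sketch of the layered model in Java. The class names are mine, not any real networking API; the point is only that each layer serves the layer above it while consuming the layer below it:

```java
// Hypothetical sketch: each protocol layer is both a server (to the
// layer above it) and a client (of the layer below it).
abstract class Layer {
    private final Layer below; // the layer this one is a client of

    Layer(Layer below) { this.below = below; }

    // Called by the layer above - here we act as its server.
    void send(byte[] payload) {
        byte[] framed = encode(payload); // add this layer's header/framing
        if (below != null) {
            below.send(framed);          // and here we act as a client
        }
    }

    // Each layer's single responsibility: its own encoding.
    abstract byte[] encode(byte[] payload);
}

class TransportLayer extends Layer {
    TransportLayer(Layer below) { super(below); }
    byte[] encode(byte[] p) { return p; } // e.g. prepend sequence numbers
}
```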

Which is exactly what Sun Microsystems is up to architecturally.

What Sun is up to with Java

A question you should have asked yourself is "Why is Sun spending millions on a software model and giving it away?" The answer to this very salient question is mindshare. This was (and is) the basis of Microsoft's winning strategy, but now Microsoft faces the IBM problem. Its major asset - an infrastructure widely deployed and holding major mindshare - has become a liability, because it prevents Microsoft from taking direct advantage of everything we've learned since the model was new.

Sun is giving away a small, light, orthogonal infrastructure which represents a functional superset of Windows and which is really truly platform independent.

Which brings us back to the question of why Sun would pay heaps to develop it and then give it away. The answer to this has been given publicly and repeatedly by Scott McNealy of Sun: "We're in the business of selling hardware, not software."

Traditionally Sun is famous for sexy but overpriced workstations and servers for Unix. But the inexorable machinations of the Microsoft marketing machine have been steadily eroding both the mindshare and the market share of Unix.

Suppose you were in this position and you had the technology and the capability to produce the world's first bionic Windows - smaller, faster, stronger, cheaper and totally platform independent. In particular, independent of the Intel CPU architecture in a way that Microsoft has never been able to achieve for Windows NT. (You can get NT for various CPUs, but it's almost useless, because almost every third-party add-on - office applications, server extensions, fax servers and so on - has severe availability and compatibility problems away from the Intel architecture.) And suppose you hitched your technological marvel to this year's glamour fad - the web - and managed to make the Java VM more widely deployed than Windows.

Practically everything on the planet is Java-ready, or can be made so quickly and at trivial cost. The market's reaction has been to begin producing application software for Java, because it solves both deployment and compatibility problems in one fell swoop, at basically no extra cost, and because it's glamorous right now.

I still haven't answered the question: why free? It's a matter of acceptance and deployment. Giving it away meant that we all deployed it, at no cost to Sun. But how is that a material advantage to Sun?

Suppose you're in Sun's position and you have a really good idea for a stack-optimised CPU architecture. You write a new language reminiscent of a thoroughly established one, but expressly designed to gain maximum advantage from your whiz-bang new CPU. You manage to get a version of the language plus a CPU emulator deployed on practically every box on Earth, so that it begins to be regarded as the language for portability, and the market starts to embrace it. Suppose you create this situation and then you release a CPU which runs this language of portability directly. Suppose the net effect of using this CPU is a 500% performance advantage over anything else in the world, plus software compatibility between desktop and laptop and even palmtop. You'll sell a lot of CPUs, methinks.

Sun makes hardware, remember?

Drawing the line

Fat client, thin server: this was the way of the late eighties and early nineties. Then thin client, fat server, in the mid nineties and even now. And always, dissatisfaction.

We traded the maintenance and deployment problems of fat clients for the bandwidth costs and inflexibility of custom thin clients and customised fat servers, and then we bought flexibility back by redistributing the custom thin clients every time they were invoked - burning precious bandwidth to solve problems we shouldn't have had. After that, middleware was the answer. And it actually was, sort of.

Personally I perceive reality as a continuum of resources. The only things which vary are the density of resources and the trouble people have taken to impede my access to them. A server is a process, one which generates a single resource in response to demands for that resource. From the point of view of its clients, the server is the resource. I like servers. When a server is properly crafted and very robust and regular in its behaviour, the distinction between server and resource becomes academic and may be omitted from schema, reducing apparent and effective complexity. This is a Good Thing.

When people have trouble working out where the client stops and the server begins, it is usually because several distinct servers are masquerading as a single one.

Consider Microsoft SQL Server. Purported to be a relational database engine, it is actually nothing of the sort. Before you decide I'm on an anti-Microsoft kick, let me point out that no SQL server is relational, regardless of manufacturer, because SQL is not relational.

A relational database engine would generate relations. Nothing else. Just relations. You could certainly build a SQL server around a relational database engine, but neither can be the other. The problem here is that what we have is actually two servers - or rather, we should have two. SQL can generate relations, but there is nothing compelling the author to confine himself to relational activities. You can do things like sorting and grouping. A pure relational database engine can only form products, apply theta-selections to them, and project relations from the result. The direct consequence of limiting its capabilities to product-select-project (PSP) is provable correctness. Not many branches of computer science can boast that. In my less-than-humble opinion, only an idiot would compromise it. Apparently our industry is full of idiots.
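
To see how small pure PSP really is, here is a toy sketch in Java. The names are illustrative and it is no real product's API; it forms products, applies theta-selections and projects, and can do nothing else:

```java
import java.util.*;
import java.util.function.Predicate;

// A toy relational engine limited to product-select-project (PSP).
// Relations are sets of tuples; a tuple maps attribute names to values.
class Relation {
    final Set<Map<String, Object>> tuples = new HashSet<>();

    // Cartesian product: every pairing of a tuple from each relation.
    Relation product(Relation other) {
        Relation result = new Relation();
        for (Map<String, Object> a : tuples)
            for (Map<String, Object> b : other.tuples) {
                Map<String, Object> combined = new HashMap<>(a);
                combined.putAll(b);
                result.tuples.add(combined);
            }
        return result;
    }

    // Theta-selection: keep only the tuples satisfying the predicate.
    Relation select(Predicate<Map<String, Object>> theta) {
        Relation result = new Relation();
        for (Map<String, Object> t : tuples)
            if (theta.test(t)) result.tuples.add(t);
        return result;
    }

    // Projection: keep only the named attributes. Duplicates collapse
    // automatically, because a relation is a set.
    Relation project(String... attributes) {
        Relation result = new Relation();
        for (Map<String, Object> t : tuples) {
            Map<String, Object> narrowed = new HashMap<>();
            for (String a : attributes) narrowed.put(a, t.get(a));
            result.tuples.add(narrowed);
        }
        return result;
    }
}
```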

Or maybe not. There are very good reasons for wanting to do certain thoroughly non-relational things at the server. Like sorting. If you want to sort and re-sort - and we often do, usually in a hurry - you need indices. At the server you have them; at the client you must either generate them or truck them all the way from the server.

I've been deliberately sloppy with words in the last paragraph, because I hope to illustrate a point. We are inclined to use the word server for the computer on which the principal data store resides. But 'server' is a role, a function. The terms client and server properly belong to processes serving functions, and those functions should be atomic.

The client process is a logical server. It has a client of its own, generally a human being. The services on which it depends are typically legion, but most of them can be safely omitted from logical models for the sake of clarity and simplicity. There is nothing requiring the entire "client process" to run on the machine in front of the user. Even now, stored procedures are in use. This is a harbinger of a structure which I hope will become ubiquitous.

Consider: the user is a client of the application. The application is a client of a stored procedure. The stored procedure is a client of a SQL Server. The SQL Server is a client of a relational database engine, or at least it would be if it were properly designed and built. The relational database engine is a client of the disk operating system, which is a client of a device driver which is a client of the BIOS which is a client of the firmware on the hard disk. Logically, stored procedures are clients of the SQL Server, and servers to a connection manager. In present systems the connection manager is part of the SQL Server even though logically it should not be.
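
The top of that chain is easy to show in code. Here is a minimal JDBC sketch of an application acting as a client of a stored procedure; the connection URL, credentials, procedure name, parameter and column are all hypothetical:

```java
import java.sql.*;

// The application as a client of a stored procedure, which is in turn
// a client of the SQL server. Everything below the call is invisible.
public class StoredProcClient {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:example://dbhost/sales", "user", "password");
             CallableStatement call = con.prepareCall("{call get_orders(?)}")) {
            call.setInt(1, 42);                     // hypothetical customer id
            try (ResultSet rs = call.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("order_id"));
                }
            }
        }
    }
}
```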

As you can see, there is already a proliferation of servers, even if the situation is designoid rather than designed. By and large they do exactly one thing each, and it all works quite well. The trouble is that up the top of this tottering stack of technologies, things haven't had a chance to shake down properly. Eventually evolution will divide things up appropriately. But evolution is not kind to individuals, and it can be slow. Worse, it is constrained to follow paths along which each step is viable in the environment extant. Excellent solutions might never be tried, because they cannot be reached by progressions of viable intermediate states.

Allegedly we are intelligent, and capable of design. Of more than bumbling and tumbling along at the mercy of our environs. Capable of decision. I like to think that I am, and so I feel compelled to observe the lesson that may be learned: to point it out and to advocate its adoption.

Servers become simple when they perform only one function. It becomes possible to validate them. They also become small, and thus lend themselves to distribution between processors in a way that does not require the processes to share a computer. It astonishes me that people don't notice that a desktop PC is a network in its own right: a bunch of small, simple computers, each allocated one function and providing one service, cooperating and communicating via a bus. It's a LAN in a box. Local, as you can see, is relative.
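
As a concrete illustration, here is a sketch of a server that performs exactly one function - telling the time. The port number is arbitrary. There is almost nothing to validate, which is the point:

```java
import java.io.*;
import java.net.*;
import java.util.Date;

// A single-function server: accept a connection, report the time,
// hang up. It can live on any box on the network. Port 7913 is arbitrary.
public class DaytimeServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(7913)) {
            while (true) {
                try (Socket client = listener.accept();
                     PrintWriter out = new PrintWriter(
                             client.getOutputStream(), true)) {
                    out.println(new Date()); // the whole service, start to finish
                }
            }
        }
    }
}
```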

Would you believe I'm having trouble expressing myself? The concept I'm trying to convey is so clear to me that it's too obvious for words, which is why I appear to be wandering in circles. I'll cut to the chase.

If you chop everything up into lots of little servers, each one becomes simple enough to validate, small enough to distribute, and free to run wherever it makes the most sense.

Aye, Hamlet, and there's the rub: if you can't make up your mind where to draw the line between client and server, you probably need to draw more than one line.

Please don't abuse your Web Server

The wheel, version 8144

Something that once gave me a lot of pause: the relative merits of the Unix model versus the Windows NT model. Under Unix, the general strategy is that processes live and die on demand. Unix processes are pretty lightweight, so starting and stopping them frequently isn't the major problem it is under NT, where the creation and destruction of processes is expensive and slow.

That led to the ISAPI model for providing services. ISAPI pre-loads code. Threads are pre-created and kept idle in a pool until they are assigned to service incoming requests. I always thought this was a pretty good idea - no overhead beats low overhead. But there's no real margin in it; the web server is being used as a simple dispatcher. I mean, why? The net has long had a much simpler and lighter-weight way to achieve exactly this: well-known ports. These are implemented just about universally.
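
In Java terms the ISAPI idea reduces to something like this sketch: the worker threads are created once, up front, and incoming requests are merely handed to whichever worker is idle. The pool size and the demonstration task are illustrative:

```java
import java.util.concurrent.*;

// Threads are created once and parked in a pool; nothing is spawned
// per request, so per-request overhead is close to nil.
public class PooledDispatcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public void dispatch(Runnable request) {
        pool.submit(request); // an idle worker wakes and services it
    }

    public static void main(String[] args) {
        PooledDispatcher d = new PooledDispatcher();
        for (int i = 0; i < 20; i++) {
            final int id = i;
            d.dispatch(() -> System.out.println("request " + id
                    + " on " + Thread.currentThread().getName()));
        }
        d.pool.shutdown();
    }
}
```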

Web servers, for example, are universally on port 80. SMTP servers are always on port 25. Under Unix there is typically a request-dispatching process called inetd, which sits listening on all the well-known ports listed in its rc file (the equivalent of an INI file or a registry key). It's a little like somebody calling a company switchboard: inetd is the switch operator. When something requests a connection to port 25, inetd "puts it through" to the SMTP daemon. Admittedly this represents something of a security risk. The business of performing security checks is left to whatever actually services the request, and this is not handled uniformly, if at all. Provision is not normally made for encrypted authentication, so even when authentication is performed, it's often done in plain text over an insecure link. (The net is composed almost exclusively of insecure links.) This is the foundation of most of the security breaches on the Internet.
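
To show how little machinery the well-known-port scheme needs, here is a minimal inetd-flavoured dispatcher sketched in Java. The port numbers and one-line handlers are illustrative only, and - true to the paragraph above - it performs no security checks at all:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;

// An inetd-like dispatcher: listen on several ports and "put the call
// through" to whichever service owns the port the caller asked for.
public class MiniInetd {
    interface Service { void handle(Socket s) throws IOException; }

    public static void main(String[] args) {
        // The dispatch table plays the part of inetd's rc file.
        Map<Integer, Service> table = Map.of(
                8025, s -> s.getOutputStream().write("220 smtp-ish\r\n".getBytes()),
                8080, s -> s.getOutputStream().write("HTTP/1.0 200 OK\r\n\r\n".getBytes()));
        table.forEach((port, svc) -> new Thread(() -> {
            try (ServerSocket listener = new ServerSocket(port)) {
                while (true) {
                    try (Socket conn = listener.accept()) { svc.handle(conn); }
                }
            } catch (IOException e) { e.printStackTrace(); }
        }).start());
    }
}
```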

By the same token, web servers typically pose the same kinds of risk: either they remain relatively dumb document servers, or they compromise security.

RTFM, Netscape, Microsoft et al!

Something that annoys me: people keep asking me whether I have any experience "programming" HTML. You cannot program HTML. It's a mark-up language, for crying out loud - that's what HTML means, hypertext mark-up language. It's an application of SGML, not a programming language. You use it to mark up documents so that their structure is explicit, they can be indexed automatically, and they can be rendered on whatever medium comes to hand.

Most of the "enhancements" offered by Netscape and Microsoft fly directly in the face of the design principles that made HTML so widely applicable and therefore popular in the first place.

In pursuit of their own profit margins they are tailoring HTML to a particular display medium at the expense of its actual functions - auto indexing and medium independence.

Some brownie points to Microsoft for cascading style sheets. By the book, and before anyone else.

And the <center> tag is an atrocity. Think about how the other tags work. Hint: OO. If you can't figure it out yourself in under ten minutes there's no point in my explaining it, so there I leave you.



Written by: Peter Wone
May '97

