
Deep understanding of data flow promises major boost to XML hardware

In this XML Developer Tip, Ed Tittel re-examines the future of XML hardware technology.

Those of you who have been reading my XML tips since I started them nearly five years ago may remember when XML hardware appeared to offer significant performance and security boosts to specialized Web operations, especially those using various forms of XML-based transaction processing markup languages. Things have been quiet on that front of late -- though you can still find plenty of gear, such as XML accelerators and various kinds of specialized content-based routers -- perhaps because only service providers and large commercial Web sites operate at sufficient scale, scope and budget to make effective use of this generation of XML hardware technology. But in a series of articles for XML.com (and some outstanding work on SourceForge.net), Jimmy Zhang -- creator of the VTD-XML parsing technology, with a deep understanding of both formal markup processing and how custom silicon really works -- lays out a different kind of future...
On the hardware front, two kinds of "brains" compete to perform all kinds of tasks. On the one hand, general-purpose CPUs are extremely flexible and achieve truly general computing capabilities (and Turing machine status) by breaking all code down into a series of simple-minded instructions and then executing those instruction sequences to do a variety of tasks.

On the other hand, custom silicon usually comes in one of two forms: application-specific integrated circuits (ASICs) or field programmable logic arrays (FPLAs). An ASIC effectively compiles one program into silicon; by definition, it is very, very good and fast at doing one kind of job, but usually unable to do anything else. FPLAs work at a slightly higher level of abstraction: they configure themselves to perform specialized tasks, but can reconfigure themselves for other tasks if needed.

Custom hardware based on ASICs or FPLAs can usually outperform CPUs when it comes to handling the tasks for which they're designed, often by an order of magnitude or better. This makes custom chips ideal for repetitive, complex tasks that meet certain other criteria: Data streams can be broken into constant-sized chunks for processing; backing up within data streams is seldom necessary, if ever; and maintaining large numbers of complex in-memory data structures is not required. Any kind of data handling, computation or analysis that lends itself to pipelining usually fits this model very well.
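To make those criteria concrete, here is a minimal Python sketch of the processing pattern that maps well onto custom silicon: the stream is consumed forward-only in constant-sized chunks, and each chunk is handled by a purely local transform with no lookback and no growing state. The names (`process_chunk`, `pipeline`) and the XOR stage standing in for encryption are illustrative assumptions, not anything from the article.

```python
import io

CHUNK_SIZE = 8  # constant-sized chunks -- e.g. 64 bits at a time


def process_chunk(chunk: bytes) -> bytes:
    """Hypothetical per-chunk stage: an XOR 'cipher' standing in for
    encryption -- a purely local transform that needs no lookback."""
    return bytes(b ^ 0x5A for b in chunk)


def pipeline(stream):
    """Consume the stream forward-only, one fixed-size chunk at a time,
    keeping no state between chunks -- the shape a hardware pipeline wants."""
    while chunk := stream.read(CHUNK_SIZE):
        yield process_chunk(chunk)


data = b"<doc><item>42</item></doc>"
out = b"".join(pipeline(io.BytesIO(data)))
# Running the same stage again inverts the XOR and recovers the input.
assert b"".join(pipeline(io.BytesIO(out))) == data
```

Encryption/decryption and packet segmentation/reassembly follow exactly this shape, which is why they moved into silicon long ago.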

The problem with bringing most XML processing down to the chip level lies not so much with XML itself as with the processing models that XML parsers use. Case in point: DOM and SAX, which both require construction of complex in-memory data structures and the ability to reference arbitrarily forward and backward within those structures. They also use dynamic data structures that invariably grow over time, and grab data in arbitrary-sized chunks that can be quite large. In a very small nutshell, this makes mapping such processors onto silicon either difficult to implement or inefficient in terms of what silicon-based processing does best. They therefore don't really offer the kinds of performance or processing advantages usually necessary to justify the work of moving specialized processing tasks onto silicon.
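A small illustration of the point, using Python's standard-library DOM implementation: the parser must build a dynamically growing tree of node objects in memory, and callers expect arbitrary navigation in either direction across that tree -- easy for a CPU chasing pointers, awkward for a fixed-function pipeline that only ever sees a forward stream.

```python
from xml.dom.minidom import parseString

# Parsing builds a full in-memory tree whose size grows with the document.
doc = parseString("<orders><order id='1'/><order id='2'/></orders>")
orders = doc.documentElement.getElementsByTagName("order")

# Random access in either direction across the tree:
last = orders[-1]
assert last.previousSibling.getAttribute("id") == "1"  # backward reference
assert last.parentNode.tagName == "orders"             # upward reference
```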

This is where Jimmy Zhang's virtual token descriptor (VTD) approach to XML processing takes off like a rocket. His model maintains a copy of the original XML document in memory (which can also be windowed through in chunks) and then processes the document by tokenizing it into constant 64-bit records. Since tasks like encryption/decryption and segmentation/reassembly of packetized data work the same way, are already well understood and are widely implemented in silicon, suddenly we've got a model for XML processing that is "good to go" -- one that can achieve the kinds of performance and processing advantages that make turning software into silicon worthwhile in the first place.
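The key idea can be sketched in a few lines: instead of allocating a node object per token, each token is recorded as a single fixed-width 64-bit integer, with the token's type, length and offset into the original document packed into bit fields. The field widths below are illustrative assumptions for this sketch, not the actual VTD-XML record layout.

```python
# Illustrative bit-field widths (assumed, not VTD-XML's real layout): 64 bits total.
TYPE_BITS, LEN_BITS, OFF_BITS = 4, 20, 40


def pack(token_type: int, length: int, offset: int) -> int:
    """Pack a token into one constant-sized 64-bit record."""
    assert token_type < (1 << TYPE_BITS)
    assert length < (1 << LEN_BITS)
    assert offset < (1 << OFF_BITS)
    return (token_type << (LEN_BITS + OFF_BITS)) | (length << OFF_BITS) | offset


def unpack(record: int):
    """Recover (type, length, offset) from a 64-bit record."""
    offset = record & ((1 << OFF_BITS) - 1)
    length = (record >> OFF_BITS) & ((1 << LEN_BITS) - 1)
    token_type = record >> (LEN_BITS + OFF_BITS)
    return token_type, length, offset


ELEMENT = 1  # hypothetical token-type code
xml = "<doc><item>42</item></doc>"
# The "item" element name spans 4 characters starting at offset 6:
rec = pack(ELEMENT, 4, 6)
assert rec < (1 << 64)                  # fits one 64-bit record
assert unpack(rec) == (ELEMENT, 4, 6)   # round-trips losslessly
assert xml[6:6 + 4] == "item"           # record points back into the document
```

Because every token is the same fixed size and the original text is never copied into node objects, a stream of such records is exactly the kind of constant-sized, forward-flowing data that pipelined hardware handles well.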

I predict that this is going to lead to a renaissance of XML hardware development, and that we're going to start seeing custom chips and boards that will offer abilities to support important and specialized XML applications -- think security, EDI, e-commerce -- in all kinds of powerful, safe and incredibly speedy implementations.

I'm not sure that you'll see every markup application moving onto hardware, but I am sure that the class of applications that provide critical functions, services and data exchange are all likely candidates for such treatment. The process should take two to three years to unfold, but should also be fascinating to watch.

Here are the XML.com citations I mentioned, with yet another interesting item on XML hardware:

Ed Tittel is a full-time writer and trainer whose interests include XML and development topics, along with IT certification and information security topics. E-mail Ed at etittel@techtarget.com with comments, questions or suggested topics or tools for review.


 
