Download Chapter 2: Re-make or Re-model -- Cloud Developer Strategies
Ever-faster processors and cheaper blade computers set the stage for the cloud era. They also insulated developers from some performance and scalability issues. But with memory architecture in particular, would-be cloud architects may want to revisit their designs if they want to see continued performance gains from parallel operations (a form of computing in which many calculations are carried out simultaneously).
While many factors have combined to make parallel cloud computing widely attainable, parallelism is still difficult. But in recent years, distributed caching and machine-level parallelism, especially in the Java space, have matured quite a bit, and the diverse offerings of assorted vendors may indicate how some cloud applications will evolve.
JavaSpaces, for example, arose out of a smorgasbord of Java community standards. Based on the notion of a tuple space (a means of dividing associative memory into units that can be accessed concurrently), the JavaSpaces framework enables scalable parallel processing through distributed object caching.
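The core tuple-space idea can be sketched in a few lines of plain Java. The class below is a hypothetical, single-JVM illustration only, not the actual JavaSpaces API (which distributes entries across machines and adds leases and transactions): writers deposit tuples, and takers block until a tuple matching their template appears, then remove it atomically. The names `TupleSpace`, `write`, and `take` here are our own illustrative choices.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal in-memory sketch of a tuple space. Real JavaSpaces
// implementations distribute this storage and matching across nodes.
class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    // Deposit a tuple and wake any takers waiting on a match.
    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
        notifyAll();
    }

    // Atomically remove and return the first tuple satisfying the
    // template, blocking until one is available.
    public synchronized Object[] take(Predicate<Object[]> template)
            throws InterruptedException {
        while (true) {
            for (Object[] t : tuples) {
                if (template.test(t)) {
                    tuples.remove(t);
                    return t;
                }
            }
            wait(); // releases the lock until a writer calls notifyAll()
        }
    }
}

public class TupleSpaceDemo {
    public static void main(String[] args) throws Exception {
        TupleSpace space = new TupleSpace();

        // A worker takes "task" tuples and writes back "result" tuples,
        // never knowing who posted the work -- the space decouples them.
        Thread worker = new Thread(() -> {
            try {
                Object[] task = space.take(t -> "task".equals(t[0]));
                int n = (Integer) task[1];
                space.write("result", n * n);
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        space.write("task", 7);                      // master posts work
        Object[] result = space.take(t -> "result".equals(t[0]));
        worker.join();
        System.out.println("result = " + result[1]); // prints "result = 49"
    }
}
```

Because both sides only ever touch the shared space, more workers can be added without changing the master, which is the scalability property the paragraph above describes.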
In recent years, technologists have worked, somewhat under the radar, to commercialize this and other parallel schemes, especially for Wall Street trading applications, given their real-time nature and heavy compute requirements.
On Wall Street, Enterprise JavaBeans (EJBs) and their associated proxies were used to abstract away the location of objects, but this added many layers to system architectures, and with them inefficiency and latency. Garbage collection, Java's answer to the long-standing problem of manual memory management, still created performance issues where ultrahigh performance and maximal efficiency were concerned.
Download the chapter to read the rest.