Some computing functions are so process-bound that they would benefit enormously if the work could be shared efficiently -- among processor cores, between CPU and GPU, or even across system boundaries. Concurrency has long been supported through multithreading, synchronization and locking techniques, but middleware tools are now available that make parallelism easier. To select the best, you need to understand the needs of your parallel programming application; know the two basic approaches to a parallel programming model; consider the source and direction of your applications; and, finally, learn a true parallel processing language even if you don't think you need one.
Most business processes are best assessed as a set of procedures or steps, where later steps depend on the outcome of earlier ones. This linear approach characterizes almost all modern software development, and programming languages and middleware tools have evolved to optimize it.
Some applications that include complex processes and calculations can be developed with traditional tools and languages, but they can also benefit from the assignment of multiple parallel processing resources in the form of additional systems, processors or cores. Supporting these processes optimally requires synchronizing the parallel work so that data isn't used before it's available and processes don't collide over resources. Locks and semaphores are fixtures of concurrency, but they're hard to debug and limited in scalability.
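The lock-based style described above looks something like the following minimal Java sketch (the class and method names are illustrative, not from any particular product): several threads share a counter, and a `ReentrantLock` keeps their increments from colliding. Even in this tiny case, the developer must remember to acquire and release the lock correctly on every path -- the kind of bookkeeping that makes lock-based code hard to debug at scale.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative example: several threads updating one shared counter.
// Without the lock, increments could interleave and be lost.
public class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    public void increment() {
        lock.lock();          // acquire before touching shared state
        try {
            count++;
        } finally {
            lock.unlock();    // always release, even on exceptions
        }
    }

    public long count() { return count; }

    public static long run(int threads, int perThread) {
        LockedCounter c = new LockedCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();   // wait for all workers
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
        return c.count();
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100_000)); // prints 400000
    }
}
```

With the lock in place the result is deterministic; remove it and the final count can silently come up short.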
What makes a parallel model?
One challenge of parallel programming is determining just what "parallel" means. Some consider any form of concurrency support to be parallel programming, and that would include grid, or cluster, computing and also multi-instance cloud and microservice applications. Others think the model should be generalized to include any form of task distribution across parallel resources. A point to consider from the start is that more parallelism options are likely to develop over time, so it's probably not wise to lock yourself into any limited models.
It turns out most opportunities to use parallel programming relate to the implementation of specific algorithms and that these algorithms can be stated in a higher-level language. If the deciphering of that language and the execution of the resulting code is managed by middleware that's inherently resource-agile and parallelism-friendly, it's possible to avoid all the traditional concurrency issues.
Even higher-level languages -- the so-called "nonprocedural" languages -- can frame the starting point for a parallel programming model. Because these languages express intent rather than process, they can be parallelized by the language processor rather than by the developer. This illustrates a general truth: The higher-level the language, the easier it is to parallel program with it -- given the proper middleware. Conversely, if a lower-level procedural programming model is used, that model will have to evolve from current concepts, including those now used to support concurrent threads, to optimally support parallel options.
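The intent-versus-process distinction can be seen even inside a single mainstream language. In this hedged Java sketch (method names are my own), a loop spells out the exact sequence of steps, while the Streams version states only what to compute -- which is what lets the runtime, rather than the developer, decide how to partition the work across cores:

```java
import java.util.stream.LongStream;

public class IntentVsProcess {
    // Procedural: spells out each step; the execution order is baked in.
    public static long sumSquaresLoop(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) sum += i * i;
        return sum;
    }

    // Declarative: states *what* to compute; the Streams runtime is
    // free to partition the range across cores because of .parallel().
    public static long sumSquaresStream(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .map(i -> i * i)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumSquaresLoop(1000));   // 333833500
        System.out.println(sumSquaresStream(1000)); // same result, parallel plan
    }
}
```

Both produce the same answer; only the declarative form gives the language processor room to parallelize.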
Determining a model
All parallel programming should begin with the question of language, and a half-dozen basic language models have been defined. The first parallel model optimizes current languages and introduces concurrency concepts. The newest models, like X10, are designed specifically for generalized parallel programming. Others, like Smalltalk and LINQ, are nonprocedural in style and are suitable for parallel programming if the correct middleware is used.
Cluster and grid computing middleware tools, and even RESTful APIs and microservices, can be used to build "parallel" applications in languages like Java or C#. MPI, PALM and Active Messages middleware tools -- all open source -- can be used with most languages for concurrent development. There are also language-specific middleware tools for Java, C#, C++ and others. These models are easier to adopt, but it is difficult to generalize their parallelism support across targets like multicore processors or multisystem grids. Developers may also find the need for explicit synchronization of tasks burdensome.
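That explicit-synchronization burden is easy to see in plain Java, without any middleware at all. In this illustrative sketch (names are my own), the developer must hand-partition the data, submit each chunk as a task, and explicitly join every result before combining them -- exactly the bookkeeping the concurrency-middleware approach leaves on the developer's plate:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskFanOut {
    // Sum an array on a fixed thread pool. The developer must split the
    // work, submit the tasks, and explicitly join each Future.
    public static long parallelSum(long[] data, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        try {
            int chunk = (data.length + tasks - 1) / tasks;
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < tasks; t++) {
                final int lo = t * chunk;
                final int hi = Math.min(data.length, lo + chunk);
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) {
                try {
                    total += f.get();   // explicit join on each task
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(parallelSum(data, 4)); // 500500
    }
}
```

Every line of partitioning and joining here is coordination, not business logic -- which is why higher-level models try to absorb it.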
Microsoft's LINQ and PLINQ are examples of the general model for evolving traditional applications to parallel computing. LINQ lends itself to defining algorithms that represent the computational model rather than the computational method. PLINQ is a .NET middleware tool that manages parallel execution of these algorithms, so applications can be developed to separate algorithmic elements from procedural sections. A similar capability exists in Java 8, whose Streams API lets the same pipeline run sequentially or in parallel without changing the algorithm. High Performance Fortran is an algorithmic language with parallel-process support.
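The PLINQ-style separation of algorithm from execution can be sketched in Java 8 Streams (the closest analogue named above; class and function names here are my own invention). The algorithm is defined once as a pure pipeline, and the decision to run it sequentially or in parallel is made elsewhere -- the two concerns never mix:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PipelineSplit {
    // The algorithm as data: a pure function from a stream to a result.
    // It says nothing about sequential vs. parallel execution.
    static final Function<Stream<String>, List<String>> SHOUT_LONG_WORDS =
        s -> s.filter(w -> w.length() > 3)
              .map(String::toUpperCase)
              .sorted()
              .collect(Collectors.toList());

    // The execution decision lives here, outside the algorithm.
    public static List<String> run(List<String> words, boolean parallel) {
        Stream<String> s = parallel ? words.parallelStream() : words.stream();
        return SHOUT_LONG_WORDS.apply(s);
    }

    public static void main(String[] args) {
        List<String> in = List.of("ant", "bison", "cat", "dromedary");
        System.out.println(run(in, false)); // [BISON, DROMEDARY]
        System.out.println(run(in, true));  // same answer on a parallel plan
    }
}
```

Because the pipeline is side-effect-free and ends with an order-preserving collect, the sequential and parallel runs are guaranteed to agree.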
All these approaches will still require developers to accommodate parallelism either by being aware of it in programming or by disguising it within algorithm handling. The X10 language, developed as an open-source project initiated by IBM, takes a different approach. It alters the concept of "procedural" programming by creating a parallel-friendly procedural programming model built around new concepts like places and asynchronous tasks. Unlike the nonprocedural approaches, X10 lets developers build and manage parallelism rather than hiding it.
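X10's spawn-and-wait constructs (`async` to launch a task, `finish` to wait for everything spawned inside it) have a rough analogue in Java's fork/join framework -- this sketch is Java, not X10, and the class name is my own, but the shape of the pattern is the same: the developer explicitly creates parallelism and explicitly bounds it.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Rough Java analogue of X10's async/finish: fork() spawns an
// asynchronous subtask, join() waits for it. Fibonacci is used
// only because its recursion tree forks naturally.
public class FinishAsync extends RecursiveTask<Long> {
    private final long n;

    public FinishAsync(long n) { this.n = n; }

    @Override
    protected Long compute() {
        if (n < 2) return n;
        FinishAsync left = new FinishAsync(n - 1);
        left.fork();                                  // ~ async { ... }
        long right = new FinishAsync(n - 2).compute(); // do other work meanwhile
        return right + left.join();                   // ~ finish: wait for the spawn
    }

    public static long fib(long n) {
        return ForkJoinPool.commonPool().invoke(new FinishAsync(n));
    }

    public static void main(String[] args) {
        System.out.println(fib(20)); // 6765
    }
}
```

The point of X10 is that constructs like these, plus places for distribution, are first-class in the language rather than a library convention.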
Consider your goals
The number of parallel programming paths is bewildering and likely to get even more complex. For development teams, the critical consideration is the source and direction of applications. If you are doing traditional commercial programming and want to add parallel support for specialized algorithm processing, look at either concurrency middleware for your specific language or something like PLINQ. Consider the former as the base approach if you are more concerned about parallelism as cluster (grid) computing or microservices. Use the latter for general parallelism.
The direction of development is harder to factor into parallel programming planning simply because it's difficult to know what options will be available. The general hardware trend in the industry is toward more cores per processor and more processors per system. At the higher level, the trend is toward componentization and the separation of compute, storage and network functions. Even middleware is increasingly delivered "as a service," with systems running very lightweight operating systems and absorbing platform features such as microservices.
These trends make something like X10 a logical choice. There is an Eclipse-based development environment for X10, and the language compiles to Java (or C++) for execution. The X10 community has good documentation and tutorials, and it would be wise for any organization planning to adopt a parallel programming model to spend some time learning the language and understanding how system- and cluster-level accommodation of parallelism shapes language structure and middleware tools. It's difficult to fully realize the potential of a technology while keeping it invisible, and explicitly parallel development will eventually prove that here too.