Explicit Parallelism Programming Languages
- Occam: Occam is a concurrent programming language built on the Communicating Sequential Processes (CSP) process algebra.
- Erlang: Erlang is a general-purpose, high-level, concurrent programming language with a garbage-collected runtime system. It is designed for fault-tolerant, soft real-time, and distributed systems.
- Parallel Virtual Machine: Parallel Virtual Machine (PVM) is a software tool that lets a network of computers be used together as a single parallel machine.
- Ada Programming Language: Ada is an imperative, statically typed, structured, high-level programming language, extended from Pascal and other languages. It provides built-in support for explicit concurrency, tasks, synchronous message passing, and strong typing.
- Java Programming Language: Java is a high-level, object-oriented, class-based programming language. It lets programmers write code once and run it anywhere.
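To make "explicit" concrete, here is a minimal Java sketch (Java being the last language listed above): the programmer creates the threads, partitions the work, and synchronizes by hand. The class name, array size, and two-way split are illustrative choices, not part of any standard.

```java
// Explicit parallelism in Java: the programmer spawns threads,
// partitions the data, and joins the workers explicitly.
public class ExplicitSum {
    // Sum the elements of a[lo..hi) sequentially.
    static long sum(long[] a, int lo, int hi) {
        long s = 0;
        for (int i = lo; i < hi; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;

        long[] partial = new long[2];
        int mid = data.length / 2;
        // Explicitly spawn two workers, one per half of the array.
        Thread t1 = new Thread(() -> partial[0] = sum(data, 0, mid));
        Thread t2 = new Thread(() -> partial[1] = sum(data, mid, data.length));
        t1.start();
        t2.start();
        t1.join();   // explicit synchronization: wait for both workers
        t2.join();

        System.out.println(partial[0] + partial[1]); // prints 500000500000
    }
}
```

Note that every decision here — how many threads, where to split, when to wait — is made by the programmer, which is exactly what "explicit" means in this context.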
Implicit Parallelism vs Explicit Parallelism
| Parameter | Implicit Parallelism | Explicit Parallelism |
|---|---|---|
| Definition | Implicit parallelism is a characteristic of parallel programming in which the compiler or interpreter automatically exploits the parallelism. | Explicit parallelism is a characteristic of parallel programming in which concurrent computations are expressed using primitives such as special-purpose directives or library calls. |
| Programming languages used | Implicit parallelism uses conventional programming languages such as C, C++, and Fortran for the source code. | Explicit parallelism requires more programming effort and uses languages such as C, C++, Fortran, and Pascal with explicit parallel constructs. |
| Compilation of source code | The source program is written sequentially and translated into parallel object code by a parallelizing compiler. | Parallelism is specified explicitly in the source code itself. |
| Resource allocation | Parallelism is detected by the compiler, which then assigns resources in the target machine code. | Since parallelism is specified explicitly, the compiler does not need to detect it, and resources are allocated explicitly by the programmer. |
| Programming effort | Requires less programming effort from the programmer than explicit parallelism. | Requires more programming effort from the programmer than implicit parallelism. |
| Resource utilization | Resource utilization is less efficient because the compiler allocates resources according to its own analysis. | Resource utilization is more efficient because the programmer allocates resources explicitly and can tune their use. |
| Scalability | Less scalable, because the system controls parallelization. | More scalable, because the programmer controls parallelization. |
| Applications | Used in shared-memory multiprocessors. | Used in loosely coupled multiprocessors. |
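The "resource allocation" row of the table can be illustrated in Java with `java.util.concurrent`: using an `ExecutorService`, the programmer explicitly fixes the size of the worker pool rather than leaving that choice to a compiler or runtime. The pool size and task count below are illustrative values only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Explicit resource allocation: the programmer chooses the number of
// worker threads (the "resources") instead of the compiler detecting
// and allocating them.
public class ExplicitPool {
    // Compute 1^2 + 2^2 + ... + n^2 using an explicitly sized thread pool.
    static int sumOfSquares(int n, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize); // explicit pool size
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int k = i;
            futures.add(pool.submit(() -> k * k)); // each task squares one number
        }
        int total = 0;
        for (Future<Integer> f : futures) total += f.get(); // explicit join point
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(8, 4)); // prints 204
    }
}
```

Because the pool size is a literal in the code, the programmer can match it to the target machine — which is what the table means by resources being "allocated explicitly."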
Difference Between Implicit Parallelism and Explicit Parallelism in Parallel Computing
Implicit Parallelism is defined as a parallelism technique in which parallelism is automatically exploited by the compiler or interpreter. The objective of implicit parallelism is the parallel execution of code in the runtime environment. Parallelism is exploited without the programmer stating how the computations are to be parallelized; the compiler assigns resources in the target machine code for the parallel operations. Implicit parallelism requires less programming effort and is used in shared-memory multiprocessors.
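For contrast, Java's parallel streams give a flavor of the implicit style described above: the code states only *what* to compute, and the ForkJoin runtime decides how to split the work across cores. This is a loose analogy rather than true implicit parallelism — an auto-parallelizing compiler would need no source change at all, whereas here a single `.parallel()` hint is still required.

```java
import java.util.stream.LongStream;

// Implicit-style parallelism: no threads, no partitioning, no joins in
// the source code; the runtime chooses how to divide the range.
public class ImplicitSum {
    // Sum 1..n; the runtime decides the degree of parallelism.
    static long sumTo(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel() // single hint; everything else is automatic
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumTo(1_000_000)); // prints 500000500000
    }
}
```

Comparing this with the explicit thread example earlier makes the programming-effort row of the table tangible: the same computation takes one expression here versus manual thread creation, splitting, and joining there.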