A Java Framework for High Level Parallel Programming using Powerlists

Published in Proceedings of the 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'17), Taipei, Taiwan, December 18-20, 2017, pp. 255-262. DOI: 10.1109/PDCAT.2017.00049.

Full paper

A Java Framework for High Level Parallel Programming using Powerlists

Authors

Virginia Niculescu*, Frédéric Loulergue**, Darius Bufnea*, Adrian Sterca*

* Department of Computer Science, Faculty of Mathematics and Computer Science, Babeş-Bolyai University of Cluj-Napoca, Romania
** School of Informatics, Computing and Cyber Systems, Northern Arizona University, USA

Copyright

© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract

Parallel programs based on the Divide&Conquer paradigm can be defined simply and elegantly using powerlists. These parallel recursive data structures and their associated algebraic theories offer both a methodology for designing parallel algorithms and parallel programming abstractions that ease the development of parallel applications.

This paper presents how programs based on powerlists can be implemented in Java using the JPLF framework we have developed. The design of this framework is based on powerlist theory but at the same time follows object-oriented design principles that provide flexibility and maintainability. Examples are given and performance experiments are conducted; the results emphasise both the utility and the efficiency of the framework.
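The powerlist style of divide-and-conquer mentioned in the abstract can be illustrated with a small, hypothetical Java sketch (this is not the JPLF API; the class and method names are invented for illustration). A powerlist of length 2^n is either a singleton or the "tie" of two powerlists of equal length; a function such as map deconstructs the list by tie and recurses on the two halves in parallel, here using Java's Fork/Join framework:

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.function.IntUnaryOperator;

// Illustrative sketch only: maps a function over a powerlist
// (an array of length 2^n) by recursively splitting it into halves.
public class PowerListMapSketch {

    static final class MapTask extends RecursiveAction {
        final int[] in, out;
        final int lo, hi;               // half-open range [lo, hi), length 2^k
        final IntUnaryOperator f;

        MapTask(int[] in, int[] out, int lo, int hi, IntUnaryOperator f) {
            this.in = in; this.out = out; this.lo = lo; this.hi = hi; this.f = f;
        }

        @Override
        protected void compute() {
            if (hi - lo == 1) {                       // base case: singleton powerlist
                out[lo] = f.applyAsInt(in[lo]);
            } else {                                  // tie: recurse on the two halves in parallel
                int mid = (lo + hi) >>> 1;
                invokeAll(new MapTask(in, out, lo, mid, f),
                          new MapTask(in, out, mid, hi, f));
            }
        }
    }

    // map f over a powerlist p; the length must be a power of two
    static int[] map(IntUnaryOperator f, int[] p) {
        if (Integer.bitCount(p.length) != 1)
            throw new IllegalArgumentException("powerlist length must be 2^n");
        int[] out = new int[p.length];
        ForkJoinPool.commonPool().invoke(new MapTask(p, out, 0, p.length, f));
        return out;
    }

    public static void main(String[] args) {
        int[] squares = map(x -> x * x, new int[]{1, 2, 3, 4, 5, 6, 7, 8});
        System.out.println(Arrays.toString(squares)); // [1, 4, 9, 16, 25, 36, 49, 64]
    }
}
```

The same recursive structure carries over to the other powerlist deconstruction operator, zip (splitting into elements at even and odd positions), which is what makes functions like the Fast Fourier Transform [8, 15] natural to express in this style.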

Key words

Parallel recursive structures; Parallel programming; Java; Performance; Models; Framework.

BibTeX bib file

niculescu-2017.bib

References

  1. K. Achatz and W. Schulte, Architecture independent massive parallelization of divide-and-conquer algorithms, Technical report, Fakultät für Informatik, Universität Ulm, 1995.
  2. M. Aldinucci, M. Danelutto, and P. Teti, An advanced environment supporting structured parallel programming in Java, Future Generation Computer Systems, vol. 19, pp. 611-626, 2003.
  3. A. S. Anand and R. K. Shyamasundar, Scaling computation on GPUs using powerlists, Proceedings of the 22nd International Conference on High Performance Computing Workshops (HiPCW). IEEE, 2015, pp. 34-43.
  4. R. Bird, An introduction to the theory of lists, Logic of Programming and Calculi of Discrete Design, M. Broy, Ed. Springer, 1987, pp. 5-42.
  5. M. Cole, Algorithmic Skeletons: Structured Management of Parallel Computation. MIT Press, 1989.
  6. D. Caromel and M. Leyton, A transparent non-invasive file data model for algorithmic skeletons, Parallel and Distributed Processing, 2008. IPDPS 2008. IEEE International Symposium on, 2008, pp. 1-10.
  7. P. Ciechanowicz and H. Kuchen, Enhancing Muesli’s Data Parallel Skeletons for Multi-core Computer Architectures, IEEE International Conference on High Performance Computing and Communications (HPCC), 2010, pp. 108-113.
  8. J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comput., vol. 19, pp. 297-301, 1965.
  9. J. Dean and S. Ghemawat, MapReduce: Simplified Data Processing on Large Clusters, OSDI. USENIX Association, 2004, pp. 137-150.
  10. U. Dastgeer and C. W. Kessler, Smart containers and skeleton programming for GPU-based systems, International Journal of Parallel Programming, vol. 44, no. 3, pp. 506-530, 2016.
  11. S. Ernsting and H. Kuchen, Algorithmic skeletons for multi-core, multi-GPU systems and clusters, Int. J. High Perform. Comput. Netw., vol. 7, no. 2, pp. 129-138, Apr. 2012. [Online]. Available: http://dx.doi.org/10.1504/IJHPCN.2012.046370
  12. E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design patterns: elements of reusable object-oriented software. Boston, MA, USA: Addison-Wesley, 1995.
  13. J. Kornerup, Data structures for parallel recursion, Ph.D. dissertation, University of Texas, 1997.
  14. M. Leyton and J. M. Piquer, Skandium: Multi-core Programming with Algorithmic Skeletons, PDP. IEEE, 2010, pp. 289-296.
  15. J. Misra, Powerlist: A structure for parallel recursion, ACM Trans. Program. Lang. Syst., vol. 16, no. 6, pp. 1737-1767, November 1994.
  16. V. Niculescu, Data-Distributions in PowerList Theory, Theoretical Aspects of Computing (ICTAC), ser. LNCS, C. B. Jones, Z. Liu, and J. Woodcock, Eds., vol. 4711. Springer, 2007, pp. 396-409.
  17. V. Niculescu, PARES – A Model for Parallel Recursive Programs, Romanian Journal of Information Science and Technology (ROMJIST), vol. 14, no. 2, pp. 159-182, 2011.
  18. D. Skillicorn and D. Talia, Models and languages for parallel computation, Computing Surveys, vol. 30, no. 2, pp. 123-169, June 1998.
  19. Commons Math – The Apache Commons Mathematics Library, accessed: 2017-05-10. [Online]. Available: http://commons.apache.org/proper/commons-math/
  20. The JavaTM Tutorials: Fork/Join, accessed: 2017-05-10. [Online]. Available: https://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html
