The Central-European Functional Programming School is an intensive summer school in the field of functional programming. The main goal is to bring together computer scientists, researchers, and graduate and PhD students. The invited lecturers are prominent European researchers in the field, and they will present state-of-the-art functional programming techniques. The event is organized by the Department of Computer Science, Faculty of Mathematics and Computer Science, Babes-Bolyai University.
The iDataToolkit, which we presented at the previous CEFP in Budapest, is a purely functional toolkit for the Clean programming language with which one can create highly dynamic, interactive, thin-client web applications at a high level of abstraction. This toolkit reduces the programming effort of the web application programmer significantly because it takes care of state handling, rendering, user interaction, and storage management automatically.
In this year's summer school we will show how workflow applications for the web can be defined at a very high level of abstraction by specifying iTasks: Interactive Tasks. The iTasks combinators are defined on top of the iDataToolkit, offering a complete set of workflow patterns as commonly available in commercial workflow systems.
Although we do not have a graphical editor for the specification of workflows, as offered by commercial workflow systems, we do offer a very concise, intuitive, monad-based, executable textual specification in Clean. The workflow application generated from the specification is a multi-user, web-enabled system that can be accessed with any browser. An advantage of our iTasks over the commercial systems is that our workflows are statically typed. The iTasks combinators furthermore offer a very high degree of reusability, thanks to polymorphism and generic overloading. Web forms are automatically generated and handled, given the types on which the iTasks combinators work. The persistent storage and retrieval of information, in files or in a relational database, is handled automatically as well.
The implementation is unusual in that it makes heavy use of generic programming techniques. The web is stateless, but the task evaluation tree can be reconstructed automatically, such that the application can recover the results of previous tasks, re-open tasks which are not finished yet, and determine which tasks have to be done next.
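As a rough illustration only, the flavour of monad-based task sequencing can be sketched in Python; the real combinators are written in Clean and generate web forms generically, and every name in this sketch is invented for the example.

```python
# Illustrative sketch only: a "task" is modelled as a function from a
# state dictionary to a (result, state) pair, and then_do sequences two
# tasks, mirroring monadic bind. None of these names come from iTasks.

def return_task(value):
    """A task that immediately finishes with `value`."""
    return lambda state: (value, state)

def then_do(task, next_task):
    """Run `task`, then feed its result to `next_task` (monadic bind)."""
    def combined(state):
        result, state2 = task(state)
        return next_task(result)(state2)
    return combined

def edit(label):
    """Stand-in for a generated web form: reads a stored answer."""
    return lambda state: (state.get(label), state)

# A tiny two-step "workflow": ask for a name, then greet.
workflow = then_do(edit("name"), lambda n: return_task("Hello, " + n))
print(workflow({"name": "Ada"})[0])  # -> Hello, Ada
```

The real toolkit additionally persists each task's state between HTTP requests, which is what makes the stateless web appear stateful to the workflow.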
Abstract computing machines define the bare essentials of mechanically executing programs of a particular (class of) programming language(s). They describe the runtime structures which collectively represent machine states, the operators that transform machine states into others, and a control discipline that governs the sequencing of operator applications.
In the lecture we will focus on abstract machines that evaluate expressions of the pure lambda-calculus which, in a nutshell, include everything that is needed to execute functional programs. The simplicity of these machines derives from the fact that the pure lambda-calculus knows only one operation called beta-reduction which governs the substitution of variable occurrences by lambda-expressions.
Shared by all lambda-calculus machines is the idea of delayed substitutions and, related to it, the concept of environments, which hold variable-expression pairs, and of closures, which associate expressions with the environments in which they need to be evaluated. The machines also include a structure that holds the program text to be executed, a value stack, and a dump stack which keeps track of nestings of beta-reductions.
The emphasis during the first part of the lecture will be on an abstract evaluator that reduces lambda-expressions to weak normal forms and can do so following either an applicative or a normal order control discipline. Weak normalization avoids the complexity of full beta-reductions caused by the resolution of naming conflicts in that it rules out substitutions and reductions under abstractions and, in consequence, symbolic computations involving free variables. The normal order regime, at the risk of evaluating function arguments multiple times in their places of substitution, guarantees weak normal forms if they exist, whereas the applicative order regime avoids this kind of redundancy but may also do too much and in odd cases not even terminate.
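The ingredients above, environments, closures, delayed substitution, and the two control disciplines, can be condensed into a small interpreter sketch. The following is an illustration in Python, not any particular machine from the lecture:

```python
# Minimal sketch of environment-based weak normalization. Terms are
# tuples: ('var', name), ('lam', name, body), ('app', fun, arg).
# A closure pairs a lambda-term with its environment; a thunk is a
# delayed argument under the normal order regime.

def eval_weak(term, env, applicative=True):
    kind = term[0]
    if kind == 'var':
        val = env[term[1]]                # delayed substitution: look up
        if val[0] == 'thunk':             # normal order forces on each use
            return eval_weak(val[1], val[2], applicative)
        return val
    if kind == 'lam':
        return ('closure', term, env)     # no reduction under abstractions
    fun = eval_weak(term[1], env, applicative)
    if applicative:
        arg = eval_weak(term[2], env, applicative)  # evaluate argument now
    else:
        arg = ('thunk', term[2], env)               # delay the argument
    _, (_tag, name, body), fun_env = fun
    new_env = dict(fun_env)
    new_env[name] = arg
    return eval_weak(body, new_env, applicative)

ident = ('lam', 'y', ('var', 'y'))
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)                       # has no weak normal form
const_ident = ('lam', 'x', ident)                   # ignores its argument

# Normal order discards the divergent argument and terminates;
# applicative order would loop forever on this same term.
result = eval_weak(('app', const_ident, omega), {}, applicative=False)
print(result[1])  # -> ('lam', 'y', ('var', 'y'))
```

The final example reproduces the trade-off stated above: the normal order regime reaches the weak normal form, while the applicative order regime does too much and fails to terminate.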
Weakly normalizing machines are the models for all implementations of functional languages. Some of these implementations come with an applicative order regime (or semantics), most prominently Scheme; others, such as Haskell or Clean, realize a lazy control regime that, due to rigorous sharing, evaluates arguments at most once and only to the extent necessary to reach weak normal forms.
Time permitting, we will in the second part also discuss the flavours of full normalization and describe the basics of a lazy lambda-calculus machine to this effect. It centers around a weakly normalizing machine which, whenever necessary, switches to a so-called eta-extension mode to perform substitutions under abstractions, thereupon continues with weak normalization of the abstraction body, and repeats these steps until the expression is fully normalized.
In the lab course, we will work with a reduction system that supports a fully normalizing lambda-calculus, syntax-oriented editing and interactively controlled stepwise program execution. It allows us to get acquainted in an easy-to-learn fashion with some of the flavours of programming symbolic computations with free variables, partial applications and high-level program optimizations, and of following up step by step on how a program expression transforms into its normal form.
Functional programming is often advocated as a way of improving the reliability of programs. Computerised proof-assistants are tools for creating and checking proofs. Proof-assistants are generally used for proving mathematical theorems; for proving properties of actual programs they are not well equipped. Functional programs, however, have properties that make it relatively easy to use a proof-assistant to prove properties of actual programs. A proof-assistant dedicated to the language can be used to prove properties in a direct way, with lines of reasoning that stay close to the actual program. We will show the general principles of proving properties of functional programs, both in theory and in practice, using the proof-assistant Sparkle, which is dedicated to the lazy functional language Clean. Special attention will be given to the specific aspects that arise due to laziness and due to the existence of strictness annotations.
There has been a lot of recent interest in exploiting the Curry-Howard isomorphism in type systems for more or less traditional programming languages. Types based upon the Curry-Howard isomorphism can express precise properties of programs. At the summer school I will discuss the Omega system in which the specification of designs, the definition of properties, the implementation of programs, and the checking that programs adhere to their properties, are all bundled in a coherent manner into a single unified system that appears to the user to be a programming language.
At the school, I will introduce the basic ideas from scratch, and give many, many examples that illustrate its use. The talk is addressed not to experts in type systems, but to general computer scientists who are interested in describing properties of their programs precisely. Participants will be taught to use Omega as a broad-spectrum language, capable of handling abstract properties as well as implementation minutiae, where the connection between properties and programs is formal and precise.
Bio: Tim Sheard obtained his Ph.D. in Computer and Information Science at the University of Massachusetts, Amherst in 1985, and is currently Professor of Computer Science at Portland State University. His research interests include: Program Generation, Meta-Programming Systems, Theorem Proving, Logical Frameworks, Type Systems, Domain Specific Languages, and Patterns for Functional Programming. He was the General Chair of Generative Programming and Component Engineering (GPCE'04), and was an organizer of the 2001 ICFP Programming Contest, which attracted over 250 entries from around the world. He is a pioneer in the area of meta-programming and is the creator of three research artifacts (MetaML, Template Haskell, and the Omega Programming Language) with broad influence on the programming language community.
Erlang/OTP is a functional programming environment designed for building concurrent and distributed fault-tolerant systems with soft real-time characteristics, such as telecommunication systems.
Refactoring is a programming technique for improving the design of a program without changing its behaviour. Refactoring may precede a program modification or extension, preparing the program for the modification, or may be used after finishing the work in order to bring the program into a nicer shape.
We analyse the Erlang programming language with respect to refactoring. The investigation is based on seven transformations. The main problem is to express static program transformations, and statically computable conditions for such transformations, in a dynamic environment like Erlang/OTP. The lecture introduces concepts like scoping and visibility with static semantical rules instead of the dynamic semantical rules found in the Erlang Reference Manual. It is shown that there are many situations in which static analysis is not applicable; a refactoring tool for Erlang must therefore explicitly document which language constructs are fully supported, which are partially supported or unsupported, and which are inherently hopeless to support. The details (safety conditions, compensation, possible pitfalls) of the selected refactorings are presented.
The lecture introduces a refactoring tool whose implementation is based on a relational database. The abstract syntax tree of an Erlang program and all the semantical information extracted from the program are represented as relations (tables) in the database. This representation makes it possible to formulate refactorings (both conditions and transformations) easily, at a high level of abstraction.
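To give a flavour of the approach, here is a toy sketch in Python using sqlite3. The two-table schema is invented for the example, whereas the real tool stores the full abstract syntax tree and much richer semantical information; the point is that both the safety condition and the transformation become database queries.

```python
import sqlite3

# Invented, much-simplified schema: one row per defined function and
# one row per call site, identified by name/arity as in Erlang.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE function(module TEXT, name TEXT, arity INTEGER)")
db.execute("CREATE TABLE call(module TEXT, callee TEXT, arity INTEGER)")
db.executemany("INSERT INTO function VALUES (?, ?, ?)",
               [("m", "f", 1), ("m", "g", 1)])
db.execute("INSERT INTO call VALUES ('m', 'f', 1)")

def rename_function(module, old, new, arity):
    """Rename old/arity to new/arity. Safety condition: the new
    name/arity must not clash with an existing function."""
    clash = db.execute(
        "SELECT 1 FROM function WHERE module=? AND name=? AND arity=?",
        (module, new, arity)).fetchone()
    if clash:
        return False                       # refuse the refactoring
    db.execute("UPDATE function SET name=? "
               "WHERE module=? AND name=? AND arity=?",
               (new, module, old, arity))
    db.execute("UPDATE call SET callee=? "
               "WHERE module=? AND callee=? AND arity=?",
               (new, module, old, arity))
    return True

print(rename_function("m", "f", "g", 1))   # False: g/1 already exists
print(rename_function("m", "f", "h", 1))   # True: definition and calls updated
```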
Object-oriented programming (OOP) is a programming paradigm that uses "objects" to design applications and computer programs. It utilizes several techniques from previously established paradigms, including inheritance, modularity, polymorphism, and encapsulation. Today, many popular programming languages support OOP.
Functional programming is a programming paradigm that conceives computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with imperative programming, which emphasizes changes in state. Often considered important are, among others, higher-order and first-class functions, closures, recursion, non-strict evaluation (including, but not limited to, "laziness"). Functional programming languages, especially "purely functional" ones, have largely been emphasized in academia rather than in commercial software development.
CLOS (Common Lisp Object System) is the object-oriented programming standard for Common Lisp. It facilitates programming using a sophisticated object-oriented model. Originally proposed as an add-on, CLOS was adopted as part of the ANSI standard for Common Lisp. CLOS is a dynamic object system which differs radically from the OOP facilities found in static languages such as C++ or Java.
At the school we will concentrate on the CLOS system. We will see how it differs from the object-oriented facilities of imperative languages and how it fits together with the functional paradigm of Common Lisp.
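For readers who do not know Common Lisp, the flavour of generic functions that live outside any class can be approximated in Python with functools.singledispatch. CLOS itself is considerably more powerful: it dispatches on the classes of all arguments, not just the first, and supports method combination.

```python
from functools import singledispatch

# A CLOS-style generic function: the generic `describe` exists on its
# own, and methods are registered per argument class afterwards.

@singledispatch
def describe(obj):
    return "some object"            # default method

@describe.register(int)
def _(obj):
    return "the integer %d" % obj

@describe.register(list)
def _(obj):
    return "a list of %d elements" % len(obj)

print(describe(42))       # -> the integer 42
print(describe([1, 2]))   # -> a list of 2 elements
print(describe("hi"))     # -> some object (falls back to the default)
```

Note how new behaviour is attached to existing types without editing any class definition, which is the key difference from the method-inside-class model of C++ or Java.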
Our intention is to introduce participants to a number of concepts and techniques of the lambda-calculus.
We start with the syntax of lambda-expressions, notations and conventions, notion of free and bound variables, and the rules of substitution.
After this we deal with the operational semantics of the lambda-calculus. The main topics are: beta-conversion, alpha-conversion, avoiding alpha-conversion, the nameless (de Bruijn) form of expressions, equality, the definition of the lambda-calculus, eta-conversion, and extensionality.
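The nameless form replaces each variable occurrence by its distance to the binding lambda, which is how alpha-conversion is avoided. A small illustrative converter, sketched in Python with terms encoded as tuples:

```python
# Convert a named lambda-term to nameless (de Bruijn) form.
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg).
# Index 0 refers to the innermost enclosing binder. Closed terms only:
# a free variable would raise ValueError at the .index lookup.

def to_de_bruijn(term, bound=()):
    kind = term[0]
    if kind == 'var':
        return ('var', bound.index(term[1]))     # distance to the binder
    if kind == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + bound))
    return ('app', to_de_bruijn(term[1], bound),
                   to_de_bruijn(term[2], bound))

# \x. \y. x y  becomes  \. \. 1 0
print(to_de_bruijn(('lam', 'x', ('lam', 'y',
                    ('app', ('var', 'x'), ('var', 'y'))))))
```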
The next theme is normal forms; in this part we present the Church-Rosser theorems and the reduction strategies: normal and applicative order, call-by-name, call-by-need and call-by-value, and lazy and eager evaluation.
We study recursion, fixed-point combinators and recursive functions. Using the notion of solvability, we define head normal form and weak head normal form, and present Kleene's theorem, which establishes the link between lambda-definability and partial recursive functions.
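As a small worked example of fixed points in a strict setting, the call-by-value fixed-point combinator (often written Z, an eta-expanded Y) can even be written directly in Python; the plain Y combinator would loop forever under Python's eager evaluation.

```python
# Z = \f. (\x. f (\v. x x v)) (\x. f (\v. x x v))
# The inner eta-expansion (\v. x x v) delays the self-application,
# which is what makes the combinator usable under call-by-value.
Z = (lambda f:
     (lambda x: f(lambda v: x(x)(v)))
     (lambda x: f(lambda v: x(x)(v))))

# Recursion without self-reference: factorial as a fixed point.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120
```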
We also show how constants and delta-functions may be added to the calculus.
Tired of writing boilerplate code? Tired of repeating essentially the same function definition for lots of different data types? Datatype-generic programming promises to end these coding nightmares. In these lectures, I present the key abstractions of datatype-generic programming, give several applications, and provide an elegant embedding of generic programming into Haskell. The embedding builds on recent advances in type theory: generalised algebraic data types and existential types. I hope to convince you that generic programming is useful and that you can use generic programming techniques today!
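As a faint, untyped echo of representation-based generic programming (which the lectures do properly in Haskell with GADTs), a value-level type representation can drive a single generic function that works for every representable type; all names below are invented for the sketch.

```python
# "Type representations" as plain values; one generic function,
# gsize, is defined once by recursion over the representation and
# then works at every representable type.

INT = ("int",)
def LIST(rep): return ("list", rep)
def PAIR(a, b): return ("pair", a, b)

def gsize(rep, value):
    """Count the atomic values in `value`, guided by its representation."""
    if rep[0] == "int":
        return 1
    if rep[0] == "list":
        return sum(gsize(rep[1], v) for v in value)
    if rep[0] == "pair":
        return gsize(rep[1], value[0]) + gsize(rep[2], value[1])
    raise ValueError("unknown representation")

# One definition, reused at a nested type: a list of (int, list of int).
print(gsize(LIST(PAIR(INT, LIST(INT))), [(1, [2, 3]), (4, [])]))  # -> 4
```

In the Haskell embedding, the representation type is a GADT, so the compiler statically checks that each value matches its representation, which the untyped sketch cannot do.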