Functional programming languages. Higher-order functions

Functional programming combines different approaches to defining computation processes based on fairly strict abstract concepts and methods of symbolic data processing.

A distinctive feature of functional programming languages is that program texts written in them describe what problem is to be solved rather than prescribing a sequence of actions for solving it. The basic properties of functional programming languages are brevity, simplicity, strong typing, modularity, and support for deferred (lazy) evaluation.

Functional programming languages include Lisp, Miranda, Gofer, ML, Standard ML, Objective Caml, F#, Scala, Pythagoras, and others.

Procedural programming languages

A procedural programming language allows the programmer to define each step in the process of solving a problem. The peculiarity of such languages is that tasks are divided into steps and solved step by step. Using a procedural language, the programmer writes language constructs that carry out a sequence of algorithmic steps.

Procedural programming languages: Ada, Basic, C, COBOL, Pascal, PL/1, Rapira, and others.

Stack programming languages

A stack programming language is one that uses a stack machine model for passing parameters. Stack-based programming languages: Forth, PostScript, Java, C#, and others. When the stack is used as the main channel for passing parameters between words, language elements naturally form phrases (sequential chaining). This property brings these languages closer to natural languages.

Other classes of programming languages include aspect-oriented, declarative, dynamic, educational, interface-description, prototype-based, object-oriented, logic, scripting, and esoteric languages.


Standardization of programming languages. Programming paradigms

The concept of a programming language is inextricably linked to its implementation. To ensure that compiling the same program with different compilers always gives the same result, programming language standards are developed. The standardization organizations are the American National Standards Institute (ANSI), the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO).



When a language is created, a vendor standard is released, determined by the language's developers. If the language becomes widespread, then over time versions of compilers appear that do not follow the vendor standard exactly; in most cases they extend the capabilities originally fixed in the language. A consensus standard is then developed to bring the most popular implementations of the language into line with one another. A very important factor in the standardization of a programming language is the timeliness of the standard's appearance: it should come before the language becomes widespread and many incompatible implementations are created. As a language evolves, new standards may appear, reflecting modern innovations.

Programming Paradigms

A paradigm is a set of theories, standards, and methods that together represent a way of organizing scientific knowledge, in other words, a way of seeing the world. By analogy, it is generally accepted that a paradigm in programming is a way of conceptualizing that defines how computations should be carried out and how the work performed by a computer should be structured and organized.

There are several basic programming paradigms, the most important of which at present are the imperative (directive), object-oriented, and functional-logic paradigms. Special algorithmic languages have been developed to support programming in accordance with each particular paradigm.

C and Pascal are examples of languages designed for directive (imperative) programming, where the program developer uses a process-oriented model, that is, tries to create code that operates on data in the proper way. In this approach, the active principle is the program (code), which must perform all the actions on passive data needed to achieve the desired result.


Programming technology concerns the process of developing software products created as an integral whole: well-tested programs together with the methodological materials describing their purpose and use.

Programming is the process of creating computer programs. In a broader sense, it is the whole range of activities associated with creating programs and maintaining them in working order, that is, with computer software.

Programming technology is the set of methods and tools used in the software development process.

Programming technology is a set of technological instructions, including:

· indication of the sequence of technological operations;

· listing the conditions under which this or that operation is performed;

· descriptions of the operations themselves, where for each operation the initial data, results, as well as instructions, regulations, standards, criteria, etc. are defined.

The modern programming technology is the component-based approach, which involves building software from individual components: physically separate pieces of software that interact with each other through standardized binary interfaces. The currently accepted quality criteria for a software product are:

· functionality;

· reliability;

· ease of use;

· efficiency (the ratio of the level of services provided by the software product to a user under given conditions to the volume of resources used);

· maintainability (the characteristics of the software product that make it possible to minimize the effort needed to fix errors in it and to modify it in accordance with the changing needs of users);

· mobility (the ability of the software system to be transferred from one environment to another, in particular from one computer to another).

An important stage in creating a software product is testing and debugging.

Debugging is an activity aimed at detecting and correcting errors in a software product by means of executing its programs.

Testing is the process of executing a product's programs on a certain set of data for which the result is known in advance or for which the rules of behavior of the programs are known.

The following software testing methods exist:

1) Static testing: manual desk-checking of the program text.

2) Deterministic testing: testing with various predetermined combinations of input data.

3) Stochastic testing: the input data are selected at random, and the output is judged by the quality of the results or by an approximate estimate.


Programming styles.

A programming style is a set of programming techniques or methods that programmers use to produce programs that are correct, efficient, easy to use, and easy to read.

There are several programming styles:

  1. Procedural programming is programming in which a program is a sequence of statements. It is used in high-level languages such as Basic, Fortran, and others.
  2. Functional programming is programming in which a program is a sequence of function calls. It is used in Lisp and other languages.
  3. Logic programming is programming in which a program is a set of definitions of relations between objects. It is used in Prolog and other languages.

Object-oriented programming is programming in which the program is built around objects, an object being a collection of data together with the rules for transforming them. It is used in Turbo Pascal, C++, and others.

Lazy evaluation

In traditional programming languages (for example, C++), calling a function causes all of its arguments to be evaluated. This method of calling a function is called call-by-value. If an argument is not used in the function, the result of its evaluation is lost, so the computation was done in vain. In a sense, the opposite of call-by-value is call-by-necessity (call-by-need). In that case an argument is evaluated only if it is needed to compute the result. An example of this behavior is the conjunction operator in C++ (&&), which does not evaluate its second argument if the first argument is false.
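
A minimal Java sketch of the same short-circuit effect (the class and method names are illustrative):

class ShortCircuit {
    static boolean expensiveCheck() {
        System.out.println("expensiveCheck was evaluated");
        return true;
    }

    public static void main(String[] args) {
        boolean first = false;
        // && short-circuits: because first is false, expensiveCheck()
        // is never evaluated, which is the call-by-necessity behavior
        // described above.
        if (first && expensiveCheck()) {
            System.out.println("both operands are true");
        }
    }
}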

If a functional language does not support lazy evaluation, it is called strict. In such languages the order of evaluation is strictly defined. Examples of strict languages are Scheme, Standard ML, and Caml.

Languages that use lazy evaluation are called non-strict. Haskell is a non-strict language, as are Gofer and Miranda, for example. Non-strict languages are often pure.

Very often, strict languages include means to support some useful features inherent in non-strict languages, such as infinite lists. The Standard ML distribution contains a special module to support deferred evaluation. Objective Caml additionally supports the reserved word lazy and a construct for lists whose values are computed as needed.
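
By analogy, deferred evaluation can be emulated in a strict language such as Java by wrapping a computation in a thunk that is forced at most once; a minimal sketch (not any library's standard API):

import java.util.function.Supplier;

class Lazy<T> {
    private Supplier<T> thunk;     // the pending computation
    private T value;               // the cached result
    private boolean forced = false;

    Lazy(Supplier<T> thunk) { this.thunk = thunk; }

    T force() {
        if (!forced) {
            value = thunk.get();   // evaluate only on first demand
            forced = true;
            thunk = null;          // let the closure be garbage-collected
        }
        return value;
    }
}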

This section provides a short description of some functional programming languages (very few of them).

§ Lisp (LISt Processor). Considered the first functional programming language. Untyped. It contains many imperative features, but on the whole it encourages a functional style of programming. Uses call-by-value for its calculations. There is an object-oriented dialect of the language, CLOS.

§ ISWIM (If you See What I Mean). A functional language prototype. Developed by Landin in the 1960s to demonstrate what a functional programming language could look like. Along with the language, Landin also developed a special virtual machine for executing ISWIM programs. This call-by-value virtual machine is called the SECD machine. The syntax of many functional languages is based on that of ISWIM; in particular, the syntax of ISWIM is similar to that of ML, especially Caml.

§ Scheme. A Lisp dialect intended for scientific research in the field of computer science. Scheme was designed with an emphasis on elegance and simplicity of the language. This makes the language much smaller than Common Lisp.


§ ML (Meta Language). A family of strict languages with a developed polymorphic type system and parameterizable modules. ML is taught in many Western universities (in some of them even as the first programming language).

§ Standard ML. One of the first typed functional programming languages. It contains some imperative features, such as references to mutable values, and is therefore not pure. Uses call-by-value for calculations. A very interesting implementation of modularity and a powerful polymorphic type system. The latest language standard is Standard ML '97, for which there are formal mathematical definitions of the syntax as well as of the static and dynamic semantics of the language.

§ Caml Light and Objective Caml. Like Standard ML, they belong to the ML family. Objective Caml differs from Caml Light mainly in its support for classical object-oriented programming. Like Standard ML, both are strict, but they have some built-in support for lazy evaluation.

§ Miranda. Developed by David Turner as a standard functional language with lazy evaluation. It has a strict polymorphic type system. Like ML, it is taught in many universities. It had a big influence on the developers of the Haskell language.

§ Haskell. One of the most widespread non-strict languages. It has a highly developed type system; the module system is somewhat less well developed. The latest language standard is Haskell 98.

§ Gofer(GOod For Equational Reasoning). Simplified Haskell dialect. Designed for teaching functional programming.

§ Clean. Specifically designed for parallel and distributed programming. Its syntax is similar to Haskell's. Uses deferred (lazy) evaluation. The compiler comes with a set of I/O libraries that allow you to program a graphical user interface under Win32 or MacOS.

Recall that the most important characteristic of the functional approach is the fact that any program developed in a functional programming language can be considered as a function, the arguments of which may also be functions.

The functional approach gave rise to a whole family of languages whose ancestor, as already noted, was the LISP programming language. Later, in the 1970s, the original version of the ML language was developed, which subsequently evolved into SML and a number of other languages. Of these, perhaps the "youngest" is Haskell, created quite recently, in the 1990s.

An important advantage of implementations of functional programming languages is the automated dynamic allocation of memory for data storage. The programmer is freed from the need to manage memory manually; when necessary, a "garbage collection" procedure can be run that clears memory of data the program will no longer need.

In the functional approach, complex programs are built by aggregating functions. The program text is then a function some of whose arguments can themselves be considered functions. Code reuse thus comes down to calling a previously described function whose structure, unlike that of a procedure in an imperative language, is mathematically transparent.

Because the function is the natural formalism of functional programming languages, implementing the various function-related aspects of programming is greatly simplified. Writing recursive functions, i.e. functions that call themselves, becomes intuitively transparent, and so does the processing of recursive data structures.

Thanks to their pattern-matching mechanisms, functional programming languages such as ML and Haskell are well suited to symbolic processing.

Naturally, functional programming languages are not without certain disadvantages.

These often include the nonlinear structure of programs and the relatively low efficiency of implementations. However, the first drawback is quite subjective, and the second has been successfully overcome by modern implementations, in particular a number of recent SML translators, including a compiler for the Microsoft .NET environment.

To develop professional software in functional programming languages, you need to deeply understand the nature of the function.

Note that the term "function" means different things in mathematical formalization and in software implementation.

Thus, a mathematical function f with domain of definition A and range of values B is a set of ordered pairs (a, b), where a ∈ A and b ∈ B, such that if (a, b1) ∈ f and (a, b2) ∈ f, then b1 = b2.

In turn, a function in a programming language is a construct of that language which describes the rules for transforming an argument (the so-called actual parameter) into a result.

To formalize the concept of “function,” a mathematical theory known as lambda calculus was constructed. More precisely, this calculus should be called the calculus of lambda conversions.

Conversion means the transformation of the objects of the calculus (in programming, of functions and data) from one form to another. The original motivation in mathematics was the desire to simplify the form of expressions. In programming this particular task is not so significant, although, as we shall see later, using the lambda calculus as the initial formalization can help simplify the form of a program, i.e. lead to optimization of the program code.
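
For example, a single lambda conversion step (a beta-reduction) replaces the bound variable with the actual argument:

(λx. x·x + 1) 3 → 3·3 + 1 → 10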

In addition, conversions provide a transition to newly introduced notations and thus make it possible to represent the subject area in a more compact or a more detailed form or, in mathematical language, to change the level of abstraction with respect to the subject area. This feature is also widely used in object-oriented and structured-modular programming languages in hierarchies of objects, program fragments, and data structures. The interaction of application components in .NET is based on the same principle. It is in this sense that the transition to new notations is one of the most important elements of programming in general, and it is the lambda calculus (unlike many other branches of mathematics) that provides an adequate way of formalizing such renotation.

Let us systematize the evolution of the theories underlying the modern approach to lambda calculus.

Let us consider the evolution of the programming languages that developed within the framework of the functional approach.

Early functional programming languages, which originate from the classic LISP (LISt Processing) language, were designed to process lists, i.e. symbolic information. In this case, the main types were an atomic element and a list of atomic elements, and the main emphasis was on analyzing the contents of the list.

Early languages were followed by functional programming languages with strong typing; a typical example here is classic ML and its direct descendant SML. In strongly typed languages, every construct (or expression) must have a type.

In later functional programming languages, however, there is no need to assign types explicitly: the types of initially undefined expressions, as in SML, can be inferred (before the program is run) from the types of the expressions associated with them.

The next step in the development of functional programming languages was support for polymorphic functions, i.e. functions with parametric arguments (analogous to mathematical functions with parameters). In particular, polymorphism is supported in SML, Miranda, and Haskell.

At the present stage of development, "new generation" functional programming languages have emerged with the following advanced capabilities: pattern matching (Scheme, SML, Miranda, Haskell), parametric polymorphism (SML), and so-called "lazy" (as-needed) evaluation (Haskell, Miranda, SML).

The family of functional programming languages is quite large. This is evidenced not so much by the sheer number of languages as by the fact that many of them have given rise to entire trends in programming. Recall that LISP gave rise to a whole family of languages: Scheme, InterLisp, Common Lisp, etc.

The SML programming language was no exception. It was created in the form of the ML language by R. Milner at the University of Edinburgh and was originally intended for logical deductions, in particular theorem proving. The language is strictly typed and supports parametric polymorphism.

The development of "classical" ML has produced several modern languages with almost identical capabilities (parametric polymorphism, pattern matching, "lazy" evaluation): the SML language, developed in the UK and the USA; CaML, created by a group of French scientists at the INRIA institute; SML/NJ, a dialect of SML from New Jersey; and also a Russian development, mosml (the "Moscow" dialect of ML).

The proximity to mathematical formalization and the inherently functional orientation account for the following advantages of the functional approach:

1. ease of testing and verification of program code based on the possibility of constructing a rigorous mathematical proof of the correctness of programs;

2. unification of the presentation of the program and data (data can be encapsulated in the program as function arguments, the designation or calculation of the function value can be done as needed);

3. safe typing: invalid data operations are excluded;

4. dynamic typing: typing errors can be detected at runtime (in early functional programming languages the absence of this property could lead to overflow of the computer's RAM);

5. independence of the software implementation from the machine representation of data and from the system architecture of the program (the programmer concentrates on the details of the implementation rather than on the features of the machine representation of data).

Note that realizing the benefits that functional programming languages provide depends significantly on the choice of software and hardware platform.

If .NET technology is chosen as the software platform, then, almost regardless of the hardware implementation, the programmer or the software project manager additionally receives the following benefits:

1. integration of various functional programming languages, making the most of the advantages of each language (in particular, Scheme provides a pattern-matching mechanism, and SML provides the ability to compute as needed);

2. integration of various approaches to programming on the basis of the cross-language Common Language Infrastructure (CLI); in particular, it is possible to use C# for the advantages of the object-oriented approach and SML for those of the functional one, as in this course;

3. a common unified typing system, the Common Type System (CTS), which provides uniform and safe management of data types in a program;

4. multi-stage, flexible system for ensuring the security of program code (in particular, based on the assembly mechanism).

The main features of functional programming languages, distinguishing them both from imperative languages and from logic programming languages, are referential transparency and determinism. Functional languages vary considerably in parameters such as typing and evaluation rules. In many languages the order of evaluation is strictly defined. But sometimes strict languages contain support for useful elements inherent in non-strict languages, such as infinite lists (Standard ML has a special module to support lazy evaluation). Conversely, non-strict languages allow eager evaluation in some cases.

Thus, Miranda has lazy semantics but allows strict constructors to be specified by marking constructor arguments in a certain way.

Many modern functional programming languages are strongly typed (strong typing). Strong typing provides greater safety: many errors can be caught at the compilation stage, so the debugging stage and the overall development time are reduced. Strong typing also allows the compiler to generate more efficient code and thus speed up program execution. Alongside these, there are functional languages with dynamic typing, in which the type of a data item is determined during program execution (Chapter 3). They are sometimes called "typeless". Their advantage is that programs written in them have greater generality. A disadvantage is that many errors are deferred to the program execution stage, with the attendant need to use type-checking functions and the corresponding reduction in the generality of the program. Typed languages tend to produce more "reliable" code, while typeless languages produce more "general" code.

The next criterion by which functional programming languages can be classified is the presence of imperative mechanisms. Functional languages devoid of imperative mechanisms are customarily called "pure", and those that have them "impure". In the overview of functional programming languages below, languages are also characterized as "practical" or "academic". "Practical" languages are those with commercial application: real applications have been developed in them, or commercial programming systems have existed for them. Academic languages are popular in research circles and in computer-science education, but there are virtually no commercial applications written in them; they remain a tool for theoretical research in computer science and are widely used in the educational process.

A list of the most popular functional programming languages is given below, described by the following criteria: general information; typing; kind of evaluation; purity.

Common Lisp. A version of Lisp that can be considered a language standard since the 1970s, thanks to support from the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Typeless, eager, with a large set of imperative inclusions that allow assignment and destructive modification of structures. Practical. Suffice it to say that AutoCAD uses a Lisp dialect (AutoLISP) as its built-in extension language.

Scheme. A Lisp dialect intended for scientific research in computer science and for teaching functional programming. Thanks to the absence of imperative inclusions, the language is much smaller than Common Lisp. It dates back to the language developed by J. McCarthy in 1962. Academic, typeless, eager, pure.

Refal. A family of languages developed by V. F. Turchin. The oldest member of this family was first implemented in 1968 in Russia. It is still widely used in academic circles. It contains elements of logic programming (pattern matching). For this reason the Refal language is proposed in this textbook as a language for self-study.

Miranda. Strongly typed, supports user-defined data types and polymorphism. Developed by Turner on the basis of the earlier SASL and KRC languages. Has lazy semantics. No imperative inclusions.

Haskell. The language was developed at the end of the last century. It is widely known in academic circles, and in some Western universities it is used as the main language for teaching students. One of the most powerful functional languages: a lazy, purely functional, typed language. Haskell is a great tool for learning and for experimenting with complex functional data types. Programs written in Haskell have a significant object-code size and a low execution speed.

Clean. A language close to Haskell, tailored to the needs of practical programming. Like Haskell, it is a lazy, purely functional language containing type classes. But Clean also contains interesting features that have no equivalent in Haskell: for example, its imperative features are based on unique types, an idea borrowed from linear logic. Clean contains mechanisms that can significantly improve the efficiency of programs; these mechanisms explicitly suppress lazy evaluation. The Clean implementation is a commercial product, but a free version is available for research and educational purposes.

ML (Meta Language). Developed by a group led by Robin Milner in the mid-1970s in Edinburgh, within the LCF project (Logic for Computable Functions). The idea of the language was to create a mechanism for constructing formal proofs in a system of logic for computable functions. In 1983 the language was revised to include concepts such as modules and came to be called Standard ML. ML is a strongly typed language with static type checking and applicative program execution. It has gained great popularity in research circles and in computer-science education.

String reverse(String arg) {
    if (arg.length() == 0) {
        return arg;
    } else {
        return reverse(arg.substring(1, arg.length())) + arg.substring(0, 1);
    }
}

This function is quite slow because it repeatedly calls itself, and it is a memory hog because it repeatedly allocates temporary objects. But this is the functional style. You may find it strange how people can program like this. Well, I was just about to tell you.

Benefits of Functional Programming

You're probably thinking that there is no way I can make a case for the monstrous function above. When I first started learning functional programming, I thought so too. I was wrong. There are very good arguments for this style. Some of them are subjective; for example, programmers claim that functional programs are easier to understand. I will not make such arguments, because everyone knows that ease of understanding is a very subjective thing. Fortunately for me, there are plenty of objective arguments as well.

Unit testing

Since each symbol in FP is immutable, functions have no side effects. You cannot change the values of variables, and a function cannot change a value outside its scope and thereby affect other functions (as can happen with class fields or global variables). This means that the only result of executing a function is its return value, and the only thing that can affect that return value is the arguments passed to the function.

Here it is, the unit tester's dream. You can test every function in the program using only the required arguments. There is no need to call functions in the right order or to recreate the correct external state. All you need to do is pass arguments that cover the edge cases. If all the functions in your program pass unit tests, you can be far more confident in the quality of your software than in the case of imperative programming languages. In Java or C++, checking the return value is not enough: the function can change external state, which would also have to be verified. There is no such problem in FP.
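
A sketch of what this looks like in practice, testing the reverse function shown earlier (plain assertions, no test framework assumed; run with "java -ea" to enable them):

class ReverseTest {
    static String reverse(String arg) {
        if (arg.length() == 0) return arg;
        return reverse(arg.substring(1)) + arg.charAt(0);
    }

    public static void main(String[] args) {
        // No setup and no external state: only arguments and return values.
        assert reverse("").equals("");
        assert reverse("a").equals("a");
        assert reverse("abc").equals("cba");
        System.out.println("all tests passed");
    }
}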

Debugging

If a functional program does not behave as you expect, debugging it is a piece of cake. You can always reproduce the problem, because an error in a function does not depend on unrelated code that executed earlier. In an imperative program, a bug may appear only intermittently; you have to step through a number of bug-free paths because the function's behavior depends on external state and on the side effects of other functions. In FP the situation is much simpler: if a return value is wrong, it is always wrong, no matter what pieces of code were executed before.

Once you have reproduced the error, finding its source is a trivial task, even a pleasant one. As soon as you stop the program, you have the entire call stack in front of you. You can inspect the arguments of each function call, just as in an imperative language. The difference is that in an imperative program this is not enough, because functions also depend on the values of fields, global variables, and the states of other classes. A function in FP depends only on its arguments, and that information is right before your eyes! More than that: in an imperative program, checking the return value is not enough to tell whether a piece of code behaves correctly; you have to hunt down dozens of objects outside the function to make sure everything works. In functional programming, all you have to do is look at the return value!

As you walk up the stack, you check the arguments passed and the values returned. As soon as a return value deviates from the norm, you step into that function and continue. This repeats a few times until you find the source of the error!

Multithreading

A functional program is ready for parallelization out of the box, without any changes. You never have to worry about deadlocks or race conditions, because you don't need locks! No piece of data in a functional program is modified twice, whether by the same thread or by different ones. This means you can easily add threads to your program without ever worrying about the problems inherent in imperative languages.

If this is the case, why are functional programming languages so rarely used for multithreaded applications? In fact, they are used more often than you might think. Ericsson developed a functional language, Erlang, for use in its fault-tolerant and scalable telecommunication switches. Many others noticed the advantages of Erlang and began using it. We are talking about telecommunications and traffic-control systems, which are nowhere near as easy to scale as the typical systems developed on Wall Street. In fact, Erlang systems are not "scalable and reliable" the way Java systems are said to be; they are simply rock-solid.

The story of multithreading does not end there. If you are writing an essentially single-threaded application, the compiler can still optimize a functional program to use multiple CPUs. Let's look at the following piece of code:

String s1 = somewhatLongOperation1();
String s2 = somewhatLongOperation2();
String s3 = concatenate(s1, s2);
A compiler for a functional language can analyze this code, classify the functions that produce the strings s1 and s2 as time-consuming, and run them in parallel. This is impossible in an imperative language, because each function can change external state, and the code immediately following a call can depend on it. In FP, automatically analyzing functions and finding suitable candidates for parallelization is as trivial as automatic inlining! In this sense the functional programming style is future-proof. Hardware developers can no longer make the CPU run faster; instead they increase the number of cores and claim a fourfold increase in the speed of multithreaded computation. Of course, they conveniently forget to mention that your new processor will only show a gain in programs designed with parallelization in mind. Very little imperative software is designed that way, but 100% of functional programs are ready for multithreading out of the box.
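
A sketch of this kind of parallelization done by hand with futures (the operation names mirror the example above and are illustrative):

import java.util.concurrent.CompletableFuture;

class ParallelPure {
    static String somewhatLongOperation1() { return "Hello, "; }
    static String somewhatLongOperation2() { return "world!"; }

    public static void main(String[] args) {
        // The two operations share no mutable state, so it is safe to run them in parallel.
        CompletableFuture<String> s1 = CompletableFuture.supplyAsync(ParallelPure::somewhatLongOperation1);
        CompletableFuture<String> s2 = CompletableFuture.supplyAsync(ParallelPure::somewhatLongOperation2);
        String s3 = s1.join() + s2.join(); // concatenate(s1, s2)
        System.out.println(s3);
    }
}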

Hot Deployment

In the old days, installing Windows updates meant restarting the computer. Many times. Even after installing a new version of the media player. Windows XP improved matters considerably, but the situation is still far from ideal (I ran Windows Update at work today, and now an annoying reminder won't leave me alone until I reboot). Unix systems have had a better update model: to install updates you stop some components, but not the entire OS. Even though that looks better, it is still unacceptable for a large class of server applications. Telecommunication systems must be up 100% of the time: if an update prevents someone from calling an ambulance, lives can be lost. Wall Street firms likewise don't want to shut down their servers over the weekend to install updates.

Ideally, one would update all the relevant sections of code without stopping the system at all. In the imperative world this is impossible [transl.: in Smalltalk it is very much possible]. Imagine unloading a Java class on the fly and loading a new version in its place. If we did that, all instances of the class would stop working, because the state they stored would be lost. We would have to write tricky version-management code: serialize all existing instances of the class, destroy them, create instances of the new class, try to load the serialized data, and hope the migration goes smoothly and the new instances are valid. On top of that, the migration code would have to be written by hand each time, and it would have to preserve the links between objects. In theory it all works; in practice it never will.

In a functional program, all state is stored on the stack as function arguments. This makes hot deployment much easier! Essentially, all you need to do is compute the difference between the code running on the production server and the new version, and install the changes in the code. The rest is done automatically by the language tools! If you think this is science fiction, think twice: Erlang engineers have been updating running systems for years without stopping them.

Machine Assisted Proofs and Optimizations

Another interesting property of functional programming languages is that they can be studied mathematically. Since a functional language is an implementation of a formal system, all the mathematical operations that work on paper can be applied to functional programs. The compiler, for example, can convert a piece of code into an equivalent but more efficient piece while mathematically justifying the equivalence of the two. Relational databases have been performing such optimizations for years. There is nothing to stop you from applying similar techniques to ordinary programs.

In addition, you can use mathematics to prove the correctness of sections of your programs. If you want, you can write tools that analyze your code and automatically create unit tests for the edge cases! This functionality is invaluable for rock-solid systems. When developing pacemaker-monitoring or air-traffic-control systems, such tools are a must. And if your work is not in the area of critical applications, automated verification tools will still give you a gigantic advantage over your competitors.

Higher Order Functions

Remember, when I talked about the benefits of FP, I noted that "everything looks nice, but it's useless if I have to write in a clumsy language in which everything is final." That was a misconception. Using final everywhere looks clumsy only in imperative programming languages such as Java. Functional programming languages operate with other kinds of abstractions, ones that make you forget you ever liked changing variables. One such tool is higher-order functions.

In FP, a function is not the same as a function in Java or C. It is a superset: it can do everything a Java function can, and more. Let's say we have a function in C:

int add(int i, int j) { return i + j; }
In FP this is not the same as a regular C function. Let's extend our Java compiler to support this notation. The compiler must turn the function declaration into the following Java code (remember that there is an implicit final everywhere):

class add_function_t {
    int add(int i, int j) {
        return i + j;
    }
}

add_function_t add = new add_function_t();
The add symbol is not really a function; it is a small class with a single method. We can now pass add as an argument to other functions. We can assign it to another symbol. We can create instances of add_function_t at runtime, and they will be garbage-collected when no longer needed. Functions become first-class objects, just like numbers and strings. Functions that operate on functions (take them as arguments) are called higher-order functions. Don't let this scare you: the concept of higher-order functions is almost the same as that of Java classes operating on one another (we can pass classes to other classes). We could call those "higher-order classes", but nobody bothers to, because there is no rigorous academic community behind Java.
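
Modern Java (8 and later) builds this very mechanism into the language: a lambda expression is an instance of a functional interface, i.e. of a small class with one method. A sketch:

import java.util.function.IntBinaryOperator;

class HigherOrder {
    // apply is a higher-order function: it takes another function as an argument.
    static int apply(IntBinaryOperator f, int i, int j) {
        return f.applyAsInt(i, j);
    }

    public static void main(String[] args) {
        IntBinaryOperator add = (i, j) -> i + j; // a function as a value
        System.out.println(apply(add, 5, 10));   // prints 15
    }
}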

How and when should you use higher-order functions? I'm glad you asked. You write your program as one big monolithic chunk of code without worrying about a class hierarchy. If you see that some piece of code is repeated in different places, you factor it out into a separate function (fortunately, schools still teach how to do this). If you notice that some piece of logic in your function should behave differently in different situations, you create a higher-order function. Confused? Here is a real example from my work.

Let's assume that we have a piece of Java code that receives a message, transforms it in various ways and transmits it to another server.

void handleMessage(Message msg) {
    // ...
    msg.setClientCode("ABCD_123");
    // ...
    sendMessage(msg);
}
Now imagine that the system has changed, and you must distribute messages between two servers instead of one. Everything remains the same except the client code: the second server wants to receive this code in a different format. How do we deal with this situation? We can check where the message is headed and set the client code accordingly. For example, like this:

class MessageHandler {
    void handleMessage(Message msg) {
        // ...
        if (msg.getDestination().equals("server1")) {
            msg.setClientCode("ABCD_123");
        } else {
            msg.setClientCode("123_ABC");
        }
        // ...
        sendMessage(msg);
    }
    // ...
}
But this approach does not scale well. As new servers are added, the function will grow linearly, and making changes will become a nightmare. The object-oriented approach is to extract a common superclass MessageHandler and move the logic for determining the client code into subclasses:

abstract class MessageHandler {
    void handleMessage(Message msg) {
        // ...
        msg.setClientCode(getClientCode());
        // ...
        sendMessage(msg);
    }

    abstract String getClientCode();
    // ...
}

class MessageHandlerOne extends MessageHandler {
    String getClientCode() {
        return "ABCD_123";
    }
}

class MessageHandlerTwo extends MessageHandler {
    String getClientCode() {
        return "123_ABCD";
    }
}
Now for each server we can create an instance of the corresponding class. Adding new servers becomes more convenient. But that is a lot of text for such a small change: I had to create two new types just to add support for a different client code! Now let's do the same in our language with support for higher-order functions:

class MessageHandler {
    void handleMessage(Message msg, Function getClientCode) {
        // ...
        Message msg1 = msg.setClientCode(getClientCode());
        // ...
        sendMessage(msg1);
    }
    // ...
}

String getClientCodeOne() {
    return "ABCD_123";
}

String getClientCodeTwo() {
    return "123_ABCD";
}

MessageHandler handler = new MessageHandler();
handler.handleMessage(someMsg, getClientCodeOne);
We created no new types and did not complicate the class hierarchy; we simply passed a function as a parameter. We achieved the same effect as in the object-oriented counterpart, but with some advantages: we are not tied to any class hierarchy, we can pass any other function at runtime and change it at any time, and we keep a high level of modularity with less code. Essentially, the compiler created object-oriented glue for us! And all the other advantages of FP are preserved. Of course, the abstractions offered by functional languages do not end there: higher-order functions are just the beginning.

Currying

Most people I meet have read the book Design Patterns by the Gang of Four. Any self-respecting programmer will tell you that the book is not tied to any specific programming language and that the patterns apply to software development in general. This is a noble claim. Unfortunately, it is far from the truth.

Functional languages are incredibly expressive. In a functional language you hardly need design patterns, because the language is so high-level that you can easily start programming in concepts that eliminate the known programming patterns altogether. One such pattern is Adapter (how is it different from Facade? It looks like someone needed to stamp out more pages to fulfill the terms of a contract). This pattern turns out to be unnecessary if the language supports currying.

The Adapter pattern is most often applied to the "standard" unit of abstraction in Java - the class. In functional languages, the pattern is applied to functions. The pattern takes an interface and transforms it into another interface according to certain requirements. Here is an example of the Adapter pattern:

int pow(int i, int j);

int square(int i) {
    return pow(i, 2);
}
This code adapts the interface of a function that raises a number to an arbitrary power to the interface of a function that squares a number. In academic circles this simple technique is called currying (after the logician Haskell Curry, who performed a series of mathematical tricks to formalize it all). Since in FP functions are used everywhere as arguments, currying is used very often to bring functions to the interface needed in one place or another. And since the interface of a function is its arguments, currying is used to reduce the number of arguments (as in the example above).

This tool is built into functional languages. You don't have to manually create a function that wraps the original: the functional language does it for you. As usual, let's extend our language by adding currying.

square = int pow(int i, 2);
With this line we automatically create a squaring function with one argument. The new function will call the pow function, substituting 2 as the second argument. From a Java perspective, it would look like this:

class square_function_t {
    int square(int i) {
        return pow(i, 2);
    }
}

square_function_t square = new square_function_t();
As you can see, we simply wrote a wrapper over the original function. In FP, currying is just a simple and convenient way to create wrappers. You focus on the task, and the compiler writes the necessary code for you! It's very simple, and happens every time you want to use the Adapter (wrapper) pattern.
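
In real Java, the same adapter can be expressed with lambdas: pow in curried form is a function that, given the exponent, returns a function of the base. A sketch (illustrative, not a library API):

import java.util.function.IntFunction;
import java.util.function.IntUnaryOperator;

class Currying {
    public static void main(String[] args) {
        IntFunction<IntUnaryOperator> pow = j -> i -> (int) Math.pow(i, j);
        IntUnaryOperator square = pow.apply(2);   // fix the second argument to 2
        System.out.println(square.applyAsInt(5)); // prints 25
    }
}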

Lazy evaluation

Lazy (or deferred) evaluation is an interesting technique that becomes possible once you understand the functional philosophy. We have already seen the following piece of code when talking about multithreading:

String s1 = somewhatLongOperation1();
String s2 = somewhatLongOperation2();
String s3 = concatenate(s1, s2);
In imperative programming languages, the order of evaluation raises no questions. Since each function can affect or depend on external state, a strict order of calls must be maintained: first somewhatLongOperation1, then somewhatLongOperation2, and concatenate at the end. But in functional languages things are not so simple.

As we saw earlier, somewhatLongOperation1 and somewhatLongOperation2 can be run simultaneously, because the functions are guaranteed not to affect or depend on global state. But what if we don't want to run them simultaneously: must we then call them sequentially? The answer is no. These computations need to be run only when some other function depends on s1 and s2. We don't even have to execute them until they are needed inside concatenate. If instead of concatenate we substitute a function that, depending on some condition, uses only one of its two arguments, the second argument may never be evaluated at all! Haskell is an example of a lazy-evaluation language: Haskell guarantees no order of calls whatsoever, because it executes code only as needed.

Lazy evaluation has a number of advantages as well as some disadvantages. In the next section we will discuss the advantages, and I will explain how to live with the disadvantages.

Optimization

Lazy evaluation provides enormous potential for optimization. A lazy compiler looks at code the way a mathematician looks at an algebraic expression: it can cancel things out, skip executing whole sections of code, change the order of calls for greater efficiency, even arrange the code so as to reduce the number of errors, all while guaranteeing the integrity of the program. This is the big advantage of describing a program with strict formal primitives: the code obeys mathematical laws and can be studied with mathematical methods.

Abstracting Control Structures

Lazy computing provides such a high level of abstraction that amazing things become possible. For example, imagine implementing the following control structure:

unless(stock.isEuropean()) {
    sendToSEC(stock);
}
We want the sendToSEC function to be executed only if the stock is not European. How can unless be implemented? Without lazy evaluation we would need a macro system, but in languages like Haskell this is unnecessary: we can declare unless as a function!

void unless(boolean condition, List code) {
    if (!condition) code;
}
Note that code is not executed if condition == true. In strict languages this behavior cannot be reproduced, because the arguments would be evaluated before unless is called.
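
In a strict language such as Java, the same effect can be approximated by passing the body as a function, which defers its evaluation; a sketch:

class Unless {
    static void unless(boolean condition, Runnable code) {
        if (!condition) code.run(); // the body runs only when demanded
    }

    public static void main(String[] args) {
        boolean isEuropean = false;
        unless(isEuropean, () -> System.out.println("sendToSEC(stock)"));
    }
}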

Infinite Data Structures

Lazy languages allow you to define infinite data structures, which are much harder to create in strict languages. For example, consider the Fibonacci sequence. Obviously, we cannot compute an infinite list in finite time, let alone store it in memory. In strict languages such as Java, we would simply write a function that returns an arbitrary member of the sequence. In languages like Haskell, we can abstract away from this and simply declare an infinite list of Fibonacci numbers. Since the language is lazy, only those parts of the list that are actually used by the program are ever computed. This lets you abstract away from a large number of problems and look at them from a higher level (for example, you can apply list-processing functions to infinite sequences).
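
In Java, a conceptually infinite sequence can be sketched with an unbounded stream, of which only the demanded prefix is ever computed:

import java.util.stream.Stream;

class Fib {
    public static void main(String[] args) {
        Stream.iterate(new long[]{1, 1}, p -> new long[]{p[1], p[0] + p[1]})
              .limit(10)                                   // demand only ten elements
              .forEach(p -> System.out.print(p[0] + " ")); // 1 1 2 3 5 8 13 21 34 55
    }
}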

Flaws

Of course, free cheese is found only in a mousetrap. Lazy evaluation comes with a number of disadvantages, mostly stemming from, well, laziness. In practice, a definite order of evaluation is very often needed. Take, for example, the following code:

System.out.println("Please enter your name: ");
System.in.readLine();
In a lazy language, nobody guarantees that the first line will execute before the second! This means we cannot do I/O, we cannot use native functions normally (after all, they need to be called in a definite order so that their side effects are accounted for), and we cannot interact with the outside world! If we introduced a mechanism for ordering the execution of code, we would lose the advantage of the code's mathematical rigor (and with it all the benefits of functional programming). Fortunately, all is not lost. Mathematicians got to work and came up with several techniques that ensure instructions execute in the right order without losing the functional spirit. We get the best of both worlds! Such techniques include continuations, monads, and uniqueness typing. In this article we will deal with continuations, leaving monads and uniqueness typing until next time. Interestingly, continuations are a very useful thing in their own right, used for far more than specifying a strict order of evaluation. We'll talk about that too.

Continuations

Continuations play the same role in programming as The Da Vinci Code played in human history: the astonishing revelation of humanity's greatest mystery. Well, maybe not exactly, but they definitely tear off the covers, much as learning to take the square root of -1 once did.

When we looked at functions, we learned only half the truth, because we assumed that a function returns its value to the function that called it. In this sense a continuation is a generalization of a function. A function does not have to return control to the place from which it was called; it can return to any place in the program. A "continuation" is a parameter we can pass to a function to indicate its return point. It sounds much scarier than it actually is. Let's take a look at the following code:

int i = add(5, 10);
int j = square(i);
The add function returns the number 15, which is stored in i at the place where the function was called. The value of i is then used in the call to square. Note that a lazy compiler cannot reorder these computations, because the second line depends on the result of the first. We can rewrite this code in Continuation-Passing Style (CPS), where add returns its value to the square function.

int j = add(5, 10, square);
In this case, add receives an additional argument - a function that will be called after add finishes running. In both examples j will be equal to 225.
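
A sketch of continuation-passing style in real Java: instead of returning, add hands its result to the continuation k (the names are illustrative):

import java.util.function.IntUnaryOperator;

class CPS {
    static int add(int i, int j, IntUnaryOperator k) {
        return k.applyAsInt(i + j); // pass the result on instead of returning it
    }

    public static void main(String[] args) {
        int j = add(5, 10, x -> x * x); // square is the continuation
        System.out.println(j);          // prints 225
    }
}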

This is the first technique that allows you to specify the order in which two expressions are executed. Let's return to our I/O example.

System.out.println("Please enter your name: "); System.in.readLine();
These two lines are independent of each other, and the compiler is free to change their order as it wishes. But if we rewrite it in CPS, we will thereby add the necessary dependency, and the compiler will have to carry out calculations one after another!

System.out.println("Please enter your name: ", System.in.readLine);
In this case println has to call readLine, passing it its own result, and return the result of readLine at the end. In this form we can be sure that the two functions are called in turn and that readLine is called at all (after all, the compiler expects to receive the result of the last operation). In Java's case println returns void; but if it returned some abstract value (which could serve as an argument to readLine), our problem would be solved! Of course, building such chains of functions greatly impairs the readability of the code, but this can be dealt with. We can add syntactic features to our language that let us write expressions as usual, while the compiler chains the computations automatically. Now we can carry out calculations in any order without losing the advantages of FP (including the ability to study programs by mathematical methods)! If this is confusing, remember that functions are just instances of a class with a single member. Rewrite our example so that println and readLine are instances of such classes, and it will become clearer.

But the benefits of continuations don't end there. We can write an entire program in CPS, so that every function is called with an additional parameter, a continuation to which its result is passed. In principle, any program can be translated into CPS if every function call is treated as a special case of a continuation. This conversion can be done automatically (in fact, many compilers do exactly that).

As soon as we convert a program to CPS form, it becomes clear that every instruction has a continuation, a function to which its result will be passed, and which in a regular program would be its call site. Let's take any instruction from the last example, say add(5, 10). In a program written in CPS form it is clear what the continuation is: it is the function that add calls when it finishes its work. But what is the continuation in a non-CPS program? We could, of course, convert the program to CPS, but is that necessary?

It turns out it is not. Take a close look at our CPS conversion. If you start writing a compiler for it, you will find that the CPS version needs no stack! Functions never return anything in the traditional sense of the word "return"; they simply call another function, passing along the result of their computation. There is no need to push arguments onto the stack before each call and then pop them back: we can simply store the arguments in some fixed memory location and use a jump instead of a normal call. We don't need to keep the original arguments, because they will never be needed again: functions never return!

Thus, programs in CPS style need no stack but carry an additional argument, a function to be called. Programs in non-CPS style carry no extra argument but use a stack. What is stored on the stack? Just the arguments and a pointer to the memory location the function should return to. Well, have you guessed? The stack stores information about continuations! The return-point pointer on the stack is the same thing as the function to be called in a CPS program! To find out what the continuation of add(5, 10) is, just take the return point from the stack.

That was not difficult. A continuation and a pointer to a return point are really the same thing, except that the continuation is specified explicitly and therefore may differ from the place where the function was called. If you remember that a continuation is a function, and that a function in our language is compiled into an instance of a class, you will see that a pointer to a return point on the stack and a pointer to a continuation are actually the same thing, because our function (as an instance of a class) is just a pointer. This means that at any point in time in your program you can request the current continuation (which is essentially the information on the stack).

Okay, now we understand what the current continuation is. What does it mean? If we take the current continuation and save it somewhere, we thereby save the current state of the program: we freeze it. This is similar to an OS's hibernation mode. A continuation object stores the information needed to resume program execution from the point at which the continuation was requested. The operating system does this to your programs all the time when it switches context between threads; the only difference is that everything remains under the control of the OS. If you request a continuation object (in Scheme this is done by calling the call-with-current-continuation function), you receive an object containing the current continuation, i.e. the stack (or, in the CPS case, the next function to call). You can save this object in a variable (or even on disk). If you then decide to "restart" the program with this continuation, the state of your program is "transformed" back to the state at the moment the continuation object was taken. It is the same as switching back to a suspended thread or waking the OS from hibernation, except that you can do it many times in a row. When the OS wakes up, the hibernation information is destroyed; if it were not, you could restore the OS state from the same point over and over. It's almost like traveling through time. With continuations, you can afford it!

In what situations are continuations useful? Usually when you are trying to emulate state in systems that are inherently stateless. An excellent use for continuations has been found in Web applications (for example, in the Seaside framework for the Smalltalk language). Microsoft's ASP.NET goes to great lengths to save state between requests to make your life easier. If C# supported continuations, the complexity of ASP.NET could be halved: you would simply save a continuation and restore it on the next request. From the Web programmer's point of view there would not be a single break: the program would simply continue from the next line! Continuations are an incredibly useful abstraction for solving some problems. With more and more traditional fat clients moving to the Web, the importance of continuations will only grow over time.

Pattern matching

Pattern matching is not a particularly new or innovative idea. In fact, it has little to do with functional programming. The only reason it is often associated with FP is that functional languages have had pattern matching for some time, while imperative languages have not.

Let's begin our introduction to pattern matching with the following example. Here is a function for calculating Fibonacci numbers in Java:

int fib(int n) {
    if (n == 0) return 1;
    if (n == 1) return 1;
    return fib(n - 2) + fib(n - 1);
}
And here is the same function in a Java-like language with support for pattern matching:

int fib(0) { return 1; }
int fib(1) { return 1; }
int fib(int n) { return fib(n - 2) + fib(n - 1); }
What is the difference? The compiler implements the branching for us.

Big deal, you might say - and indeed, the idea is not that deep. It was simply observed that a great many functions contain complicated switch constructs (this is especially true of functional programs), and it was decided to make that pattern a language feature. The definition of a function is split into several variants, with a pattern written in place of the function arguments (this is reminiscent of method overloading). When the function is called, the compiler compares the arguments against all the definitions on the fly and picks the most appropriate one - usually the most specialized definition that fits. For example, int fib(int n) can be called when n is 1, but it won't be, because int fib(1) is a more specialized definition.

Pattern matching is usually more powerful than our example suggests. For example, an advanced pattern matching system lets you write the following:

int f(int n < 10) { ... }
int f(int n) { ... }
When is pattern matching useful? The list of such cases is surprisingly long! Any time you have deeply nested if constructs, pattern matching can do a better job with less code. A good example that comes to mind is the WndProc function that every Win32 program implements (even if it is hidden from the programmer behind a high fence of abstractions). A powerful pattern matching system can even examine the contents of collections: for example, if you pass an array to a function, you could select all arrays whose first element is equal to 1 and whose third element is greater than 3.

Another advantage of pattern matching is that when you need to make a change, you don't have to dig through one huge function; you just add (or modify) the relevant function definitions. This eliminates a whole layer of patterns from the famous Gang of Four book. The more complex and branching your conditions are, the more useful pattern matching becomes. Once you start using it, you will wonder how you ever managed without it.

Closures

So far we have discussed the features of FP in the context of "purely" functional languages - languages that are implementations of lambda calculus and contain no features that conflict with Church's formal system. However, many features of functional languages are useful outside lambda calculus. While an implementation of an axiomatic system is interesting because it lets programs be treated as mathematical expressions, it is not always practical. Many languages prefer to borrow elements from functional languages without adhering to a strict functional doctrine. Some such languages (for example, Common Lisp) do not require variables to be final - their values can be changed. They do not even require functions to depend only on their arguments - functions are allowed to access state outside their scope. But they do include features such as higher-order functions. Passing a function around in an impure language is slightly different from the same operation within lambda calculus, and it requires support for an interesting feature called lexical closure. Let's take a look at the following example. Remember that the variables here are not final and that a function can access variables outside its scope:

Function makePowerFn(int power) {
    int powerFn(int base) {
        return pow(base, power);
    }
    return powerFn;
}

Function square = makePowerFn(2);
square(3); // returns 9
The makePowerFn function returns a function that takes one argument and raises it to a fixed power. What happens when we evaluate square(3)? The variable power is nowhere in powerFn's scope, because makePowerFn has already returned and its stack frame has been destroyed. How, then, does square work? The language must somehow store the value of power so that square can use it. What if we create another function, cube, that raises a number to the third power? The language must then store two separate values of power, one for each function created by makePowerFn. This mechanism of storing the values is called a closure. And a closure preserves more than just the arguments of the enclosing function. For example, a closure might look like this:

Function makeIncrementer() {
    int n = 0;
    int increment() {
        return ++n;
    }
    return increment;
}

Function inc1 = makeIncrementer();
Function inc2 = makeIncrementer();
inc1(); // returns 1
inc1(); // returns 2
inc1(); // returns 3
inc2(); // returns 1
inc2(); // returns 2
inc2(); // returns 3
During execution the values of n are preserved, and the counters have access to them. Moreover, each counter gets its own copy of n, even though these values ought to have disappeared once makeIncrementer returned. How does the compiler manage to compile this? What happens behind the scenes of closures? Fortunately, we have a magic backstage pass.

Everything is done quite logically. The first thing to notice is that these local variables no longer obey simple scope rules, and their lifetime is no longer tied to the function call. The obvious conclusion is that they are no longer stored on the stack - they must be kept on the heap instead. A closure is therefore implemented just like the ordinary function we discussed earlier, except that it carries an additional reference to the surrounding variables:

class some_function_t {
    SymbolTable parentScope;
    // ...
}
When a closure accesses a variable that is not in its local scope, it looks the variable up via the parent scope. That's all there is to it! A closure connects the functional world with the OOP world: every time you create a class that stores some state and pass it somewhere, remember closures. A closure is just an object that creates its "attributes" on the fly by capturing them from the enclosing scope, so you don't have to do it yourself.

Now what?

This article only touches the tip of the functional programming iceberg. You can dig deeper and find something really big - and in our case, something good. In the future I plan to write about category theory, monads, functional data structures, type systems in functional languages, functional multithreading, functional databases, and much more. If I manage to write (and, in the process, learn) about even half of these topics, my life will not have been in vain. In the meantime, Google is your faithful friend.

Consider a function P that assigns to each element x of a set X an element y of a set Y, written y = P(x). The set X is called the domain of the function P, and the set Y its range. The value x, which may be any element of X, is called the independent variable, and the value y from Y determined by the equation y = P(x) is called the dependent variable. If the function is not defined for every x in X, it is said to be partially defined; otherwise it is totally defined.

A very important property of the mathematical definition of a function is that for a given argument x it always determines the same value, because it has no side effects. Side effects in programming languages are associated with variables that model memory cells. A mathematical function defines a value rather than prescribing a sequence of operations on numbers stored in memory cells. There are no variables in the sense given to them in imperative programming languages, so there can be no side effects.

In a programming language based on the mathematical concept of a function, programs, procedures, and functions can all be represented in the same way. In the case of a program, x corresponds to the input data and y to the results; in the case of a procedure or function, x corresponds to the parameters and y to the return values. In either case x can be called the "inputs" and y the "outputs". In the functional approach to programming there is therefore no fundamental distinction between a program, a procedure, and a function; the distinction that always remains is between inputs and outputs.

Programming languages draw a very clear line between defining a function and applying a function. A function definition describes how a value is to be computed in terms of formal parameters. A function application is a call to a defined function with actual parameters. Note that in mathematics the difference between a parameter and a variable is not always obvious; the term "parameter" is often replaced by "independent variable". For example, in mathematics one can write the definition of the squaring function, square(x) = x * x, and then apply it as square(5) to obtain 25.

Let's assume that x satisfies square(x) + 2 . . .

The main difference between imperative and functional programming lies in the interpretation of the concept of a variable. In mathematics, variables stand directly for actual values, whereas in imperative programming languages variables refer to the memory locations where their values are stored. Assignment lets you change the values in those memory locations. In mathematics, by contrast, there are no concepts of "storage location" and "assignment", so a statement such as x = x + 1

makes no sense. Functional programming adopts the mathematical view of a variable: variables are bound to values, not to memory locations. Once a variable is bound to a value, that value never changes.

Functional programming language

The following are usually considered the main properties of functional programming languages:

  • brevity and simplicity;

Programs in functional languages ​​are usually much shorter and simpler than the same programs in imperative languages.
Example (Hoare's quicksort in an abstract functional language):

quickSort([]) = []
quickSort([h:t]) = quickSort([n | n <- t, n <= h]) + [h] + quickSort([n | n <- t, n > h])

  • strong typing;

In functional languages, most errors can be caught at compile time, so the debugging stage and the overall development time are reduced. In addition, strong typing lets the compiler generate more efficient code, speeding up program execution.

  • modularity;

The modularity mechanism allows a program to be divided into several relatively independent parts (modules) with clearly defined interfaces between them. This simplifies the design and subsequent maintenance of large software systems. Support for modularity is not unique to functional programming languages, but most of them do support it.

  • functions as first-class objects of computation;

In functional languages (as in mathematics generally), functions can be passed to other functions as arguments or returned as results. Functions that take functions as arguments are called higher-order functions, or functionals. A short sketch of this appears after the list.

In pure functional programming there is no assignment operator; objects cannot be modified or destroyed, only new ones created by decomposing and combining existing ones. A garbage collector built into the language takes care of objects that are no longer needed. Thanks to this, in pure functional languages all functions are free of side effects.

  • deferred (lazy) calculations.

In traditional programming languages (such as C++), calling a function causes all of its arguments to be evaluated first. This calling convention is known as call-by-value. If an argument is not actually used by the function, the work of evaluating it is wasted. In a sense, the opposite of call-by-value is call-by-need (lazy evaluation): an argument is evaluated only if it is needed to compute the result. Both of the last two properties are illustrated in the sketch below.

Some functional programming languages

  • Gofer
  • Harlequin's MLWorks

Classification of functional languages

Haskell can be cited as an example of a pure functional language. Most functional languages, however, are hybrids, combining properties of both functional and imperative languages. Vivid examples are Scala and Nemerle, which organically combine the characteristics of object-oriented and functional languages: tail recursion and its optimization are implemented, and a function is a full-fledged object that can be stored in a variable, passed as an argument to another function, or returned from a function.

Functional languages are also divided into strict and non-strict. Non-strict languages support lazy evaluation, meaning that a function's arguments are evaluated only when they are actually needed to compute the result. A striking example of a non-strict language is Haskell; an example of a strict language is Standard ML.

Some functional languages are implemented on top of platform virtual machines (the JVM, .NET), meaning that applications in these languages can run in the corresponding runtime environment (JRE, CLR) and use its built-in classes. These include Scala and Clojure (JVM), and F#, Nemerle, and SML.NET (.NET).

Links

• http://fprog.ru/ - the journal "The Practice of Functional Programming"
• http://www.intuit.ru/department/pl/funcpl/1/ - Fundamentals of Functional Programming, by L. V. Gorodnyaya
• http://roman-dushkin.narod.ru/fp.html - a course of lectures on functional programming, taught at MEPhI since 2001
• http://alexott.net/ru/fp/books/ - a survey of literature on functional programming, covering books in both Russian and English
