What is functional programming. Approaches to evaluating arguments. Syntax in two minutes

Functional programming combines different approaches to defining computation processes based on fairly strict abstract concepts and methods of symbolic data processing.

A distinctive feature of functional programming languages is that program texts in them describe what is to be computed rather than prescribing a sequence of actions for computing it. The main properties of functional programming languages are brevity, simplicity, strong typing, modularity, and the presence of lazy (deferred) evaluation.

Functional programming languages include Lisp, Miranda, Gofer, ML, Standard ML, Objective CAML, F#, Scala, Pythagoras, and others.

Procedural programming languages

A procedural programming language allows the programmer to define each step in the process of solving a problem. The peculiarity of such languages is that tasks are divided into steps and solved step by step. Using a procedural language, the programmer writes language constructs that carry out a sequence of algorithmic steps.

Procedural programming languages: Ada, Basic, C, COBOL, Pascal, PL/1, Rapira, and others.

Stack programming languages

A stack programming language is one that uses a stack machine model for passing parameters. Stack-based languages include Forth, PostScript, and the virtual machines underlying Java and C#. When the stack is used as the main channel for passing parameters between words, language elements naturally form phrases (sequential chaining). This property brings these languages closer to natural languages.

Other categories include: aspect-oriented programming languages, declarative programming languages, dynamic programming languages, educational programming languages, interface description languages, prototype-based programming languages, object-oriented programming languages, logic programming languages, scripting languages, and esoteric programming languages.


Standardization of programming languages. Programming Paradigm

The concept of a programming language is inextricably linked to its implementation. To ensure that compiling the same program with different compilers always gives the same result, programming language standards are developed. Standardization organizations include the American National Standards Institute (ANSI), the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO).



When a language is created, a private standard is released, determined by the language's developers. If the language becomes widespread, then over time different versions of compilers appear that do not strictly follow the private standard; in most cases they extend the initially fixed capabilities of the language. A consensus standard is then developed to bring the most popular implementations of the language into line with each other. A very important factor in the standardization of a programming language is the timeliness of the standard's appearance: it should come before the language is widely distributed and many incompatible implementations have been created. As the language evolves, new standards may appear, reflecting modern innovations.

Programming Paradigms

A paradigm is a set of theories, standards, and methods that together represent a way of organizing scientific knowledge: in other words, a way of seeing the world. By analogy, it is generally accepted that a paradigm in programming is a way of conceptualizing that defines how computations should be carried out and how the work performed by a computer should be structured and organized.

There are several basic programming paradigms; the most important at the present time are the paradigms of directive (imperative), object-oriented, and functional-logic programming. Special algorithmic languages have been developed to support programming in accordance with a particular paradigm.

C and Pascal are examples of languages designed for directive programming, where the program developer uses a process-oriented model, that is, tries to create code that operates on data in the proper way. In this approach, the active principle is the program (code), which must perform all the necessary actions on passive data to achieve the desired result.


Programming technology is the process of developing software products, created as an inseparable whole in the form of well-tested programs and accompanying materials describing their purpose and use.

Programming is the process of creating computer programs. In a broader sense, it is the range of activities associated with creating and maintaining programs in a working state, that is, computer software.

Programming technology is a set of methods and tools used in the software development process.

Programming technology is a set of technological instructions, including:

· indication of the sequence of technological operations;

· listing the conditions under which this or that operation is performed;

· descriptions of the operations themselves, where for each operation the initial data, results, as well as instructions, regulations, standards, criteria, etc. are defined.

Modern programming technology is the component approach, which involves building software from individual components: physically separate pieces of software that interact with each other through standardized binary interfaces. The currently accepted quality criteria for a software product are:

· functionality;

· reliability;

· ease of use;

· efficiency (the ratio of the level of services provided by a software product to a user under given conditions to the volume of resources used);

· maintainability (characteristics of a software product that make it possible to minimize the effort needed to fix errors in it and to modify it in accordance with the changing needs of users);

· portability (the ability of a software system to be transferred from one environment to another, in particular from one computer to another).

An important stage in the creation of a software product is testing and debugging.

Debugging is an activity aimed at detecting and correcting errors in a software product, using the processes of executing its programs.

Testing is the process of executing a program on a set of data for which the result is known in advance, or for which the rules of the program's behavior are known.

The following software testing methods exist:

1) Static testing: manual desk-checking of the program.

2) Deterministic testing: testing with various combinations of input data.

3) Stochastic testing: input data are chosen at random, and the expected output is determined by the nature of the results or by an approximate estimate.


Programming styles.

A programming style is a set of programming techniques or methods that programmers use to produce programs that are correct, efficient, easy to use, and easy to read.

There are several programming styles:

  1. Procedural programming is programming in which a program is a sequence of statements. It is used in high-level languages such as Basic, Fortran, and others.
  2. Functional programming is programming in which a program is a sequence of function calls. It is used in Lisp and other languages.
  3. Logic programming is programming in which a program is a set of definitions of relations between objects. It is used in Prolog and other languages.

Object-oriented programming is programming in which the basis of the program is an object: a collection of data together with the rules for transforming it. It is used in languages such as Turbo Pascal, C++, and others.

Functional programming is a branch of discrete mathematics and a programming paradigm in which the process of computation is interpreted as the evaluation of functions in the mathematical sense of the word (as opposed to functions as subroutines in procedural programming).

It is contrasted with the imperative programming paradigm, which describes the process of computation as a sequential change of states (in a sense similar to that in automata theory). If necessary, in functional programming the entire sequence of states of a computation can be represented explicitly, for example as a list.

Functional programming involves computing the results of functions from input data and from the results of other functions, and does not involve explicitly storing program state. Accordingly, it does not involve mutating that state (unlike imperative programming, where one of the basic concepts is a variable, which stores its value and allows it to be changed as the algorithm executes).

In practice, the difference between a mathematical function and the concept of a "function" in imperative programming is that imperative functions can rely not only on their arguments but also on the state of variables external to the function, and can also have side effects and change the state of external variables. Thus, in imperative programming, calling the same function with the same parameters at different stages of the algorithm can produce different output data because of the influence of variable state on the function. In a functional language, calling a function with the same arguments always yields the same result: the output depends only on the input. This allows functional language runtimes to cache the results of functions and to call them in an order not determined by the algorithm (see Pure Functions below).
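The difference is easy to see in a small Java sketch (the class and method names here are illustrative, not taken from the original):

class PurityDemo {
    private int counter = 0;

    // Impure: the result depends on mutable state outside the arguments,
    // so two calls with the same argument can return different values.
    int impureNext(int x) {
        counter++;
        return x + counter;
    }

    // Pure: the result depends only on the arguments, so a runtime is
    // free to cache it or reorder calls without changing the outcome.
    static int add(int x, int y) {
        return x + y;
    }
}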



The λ-calculus is the basis of functional programming; many functional languages can be considered a "superstructure" on top of it.

Strengths

Increasing code reliability

The attractive side of stateless computing is the increased reliability of code due to clear structuring and the absence of any need to track side effects. Any function works only with local data and always works with it in the same way, regardless of where, how, and under what circumstances it is called. The impossibility of data mutating as it is used in different places in the program eliminates hard-to-detect errors (such as accidentally assigning the wrong value to a global variable in an imperative program).

Ease of organizing unit testing

Since a function in functional programming cannot produce side effects, objects cannot be changed either inside or outside its scope (unlike imperative programs, where one function can set some external variable that is read by a second function). The only effect of evaluating a function is the result it returns, and the only factor influencing that result is the values of the arguments.

Thus, it is possible to test each function in a program simply by evaluating it on different sets of argument values. You need to worry neither about calling functions in the right order nor about correctly forming external state. If every function in a program passes unit tests, you can be confident in the quality of the whole program. In imperative programs, checking the return value of a function is not enough: the function may modify external state, which also needs to be checked. In functional programs this is unnecessary.

Optimization options during compilation

A traditionally mentioned positive feature of functional programming is that it allows a program to be described in so-called "declarative" form: the rigid sequence of the many operations needed to compute the result is not specified explicitly but is formed automatically in the process of evaluating functions. This circumstance, together with the absence of state, makes it possible to apply fairly complex methods of automatic optimization to functional programs.

Concurrency capabilities

Another advantage of functional programs is that they provide the broadest opportunities for automatic parallelization of computations. Since the absence of side effects is guaranteed, in any function call two different parameters can always be evaluated in parallel: the order in which they are evaluated cannot affect the result of the call.

Disadvantages

The disadvantages of functional programming stem from these same features. The absence of assignment and its replacement with the generation of new data lead to the need for constant allocation and automatic freeing of memory, so a highly efficient garbage collector becomes a mandatory component of a functional program's execution system. The non-strict computation model results in an unpredictable order of function calls, which creates problems with I/O, where the order of operations matters. In addition, input functions in their natural form (such as getchar from the C standard library) are obviously not pure, since they can return different values for the same arguments; overcoming this requires some trickery.

To overcome these shortcomings, even the first functional programming languages included not only purely functional tools but also imperative mechanisms (assignment, loops, and the "implicit PROGN" were already in LISP). Using such tools solves some practical problems, but it means moving away from the ideas (and advantages) of functional programming and writing imperative programs in functional languages. In pure functional languages these problems are solved by other means; for example, in Haskell, I/O is implemented using monads, a non-trivial concept borrowed from category theory.

Tail recursion is a special case of recursion in which a function's recursive call to itself is its last operation. This kind of recursion is notable because it can easily be replaced by iteration, and many optimizing compilers implement exactly that. When a function is called, the computer must remember the place from which it was called (the return address) in order to return there when the function completes and continue executing the program; typically the return address is stored on the stack. Sometimes the last action of a function, after all other operations have completed, is simply to call a function (possibly itself) and return its result. In that case there is no need to remember the return address: the newly called function returns its result directly to the place where the original function was called from. Tail recursion is often used in programs in functional programming languages, where it is natural to express many computations as recursive functions. The ability of a translator to automatically replace tail recursion with iteration means that, in terms of computational efficiency, such code is equal to equivalent code written in iterative form.
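Java compilers, for instance, do not perform this optimization, but the transformation itself is easy to show by hand. Here is a minimal sketch of a tail-recursive factorial and the loop an optimizing translator could turn it into:

// Tail-recursive form: the recursive call is the function's last operation.
static long factorial(long n, long acc) {
    if (n <= 1) return acc;
    return factorial(n - 1, acc * n); // tail call: nothing left to do after it
}

// The equivalent iteration that tail-call optimization would produce:
static long factorialIter(long n) {
    long acc = 1;
    while (n > 1) {
        acc *= n;
        n--;
    }
    return acc;
}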

The creators of the functional language Scheme, one of the dialects of Lisp, appreciated the importance of tail recursion so much that the language specification requires every translator of the language to implement tail recursion optimization.

A higher-order function is a function that takes other functions as arguments or returns a function as its result. Higher-order functions are sometimes called functionals, although this is not entirely accurate; a more precise equivalent is the mathematical notion of an operator.

In functional programming languages, all functions are higher order functions.

String reverse(String arg) {
    if (arg.length() == 0) {
        return arg;
    } else {
        return reverse(arg.substring(1, arg.length())) + arg.substring(0, 1);
    }
}
This function is quite slow because it calls itself repeatedly. It is also a memory hog, since temporary objects are created many times. But this is the functional style. You may find it strange that people can program like this. Well, I was just about to tell you.

Benefits of Functional Programming

You're probably thinking that I can't possibly justify the monstrous function above. When I first started learning functional programming, I thought so too. I was wrong. There are very good arguments for this style. Some of them are subjective; for example, programmers claim that functional programs are easier to understand. I will not make such arguments, because everyone knows that ease of understanding is a very subjective thing. Fortunately for me, there are plenty of objective arguments as well.

Unit testing

Since every symbol in FP is immutable, functions have no side effects. You cannot change the values of variables, and a function cannot change a value outside its scope and thereby affect other functions (as can happen with class fields or global variables). This means that the only result of executing a function is its return value, and the only thing that can affect the return value is the arguments passed to the function.

Here it is, the unit tester's dream. You can test every function in a program using only the relevant arguments. There is no need to call functions in the correct order or to recreate the correct external state. All you need to do is pass arguments that match the edge cases. If all the functions in your program pass unit tests, you can be much more confident in the quality of your software than in the case of imperative programming languages. In Java or C++, checking the return value is not enough: the function can change external state, which is also subject to checking. There is no such problem in FP.

Debugging

If a functional program does not behave as you expect, debugging is a piece of cake. You can always reproduce the problem, because an error in a function does not depend on unrelated code that ran beforehand. In an imperative program, a bug may appear only intermittently, and you have to go through a number of steps unrelated to the bug itself, because the function's operation depends on external state and on the side effects of other functions. In FP the situation is much simpler: if a return value is incorrect, it will always be incorrect, no matter what pieces of code were executed before.

Once you reproduce the error, finding its source is a trivial task, even a pleasant one. As soon as you stop the program, you have the entire call stack in front of you. You can inspect the arguments of each function call, just as in an imperative language. The difference is that in an imperative program this is not enough, because functions depend on the values of fields, global variables, and the states of other classes. A function in FP depends only on its arguments, and that information is right in front of your eyes! More than that: in an imperative program, checking the return value is not enough to tell whether a piece of code behaves correctly; you have to hunt down dozens of objects outside the function to make sure everything works. In functional programming, all you have to do is look at the return value!

As you walk the stack, you check the arguments passed and the return values. As soon as a return value deviates from the norm, you step into that function and move on. Repeat this several times and you will find the source of the error!

Multithreading

A functional program is immediately ready for parallelization without any changes. You don't have to worry about deadlocks or race conditions, because you don't need locks! No piece of data in a functional program is modified twice, whether by the same thread or by different ones. This means you can easily add threads to your program without ever worrying about the problems inherent in imperative languages.

If this is so, why are functional programming languages so rarely used in multithreaded applications? More often than you think, actually. Ericsson developed a functional language called Erlang for use in fault-tolerant and scalable telecommunications switches. Many others noticed Erlang's advantages and began to use it. We are talking about telecommunications and traffic-control systems, which are nowhere near as easy to scale as the typical systems developed on Wall Street. In fact, it's not that systems written in Erlang are merely as scalable and reliable as Java systems: Erlang systems are simply ultra-reliable and ultra-scalable.

The story of multithreading does not end there. If you're writing an essentially single-threaded application, the compiler can still optimize the functional program to use multiple CPUs. Let's look at the following piece of code:

String s1 = somewhatLongOperation1();
String s2 = somewhatLongOperation2();
String s3 = concatenate(s1, s2);
A functional language compiler can analyze the code, conclude that the functions producing the strings s1 and s2 are time-consuming, and run them in parallel. This is impossible in an imperative language, because each function can change external state and the code immediately after the call can depend on it. In FP, automatically analyzing functions and finding suitable candidates for parallelization is a trivial task, like automatic inlining! In this sense, the functional programming style meets the requirements of tomorrow's hardware. Hardware developers can no longer make the CPU faster; instead, they increase the number of cores and claim a fourfold increase in the speed of multithreaded computation. Of course, they conveniently forget to mention that your new processor will show a gain only in programs developed with parallelization in mind. Very little imperative software is written that way, but 100% of functional programs are ready for multithreading out of the box.
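For comparison, in Java this parallelism has to be requested explicitly. A sketch with CompletableFuture (assuming the two long operations and concatenate exist as in the example above, and are free of side effects) might look like this:

import java.util.concurrent.CompletableFuture;

CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> somewhatLongOperation1());
CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> somewhatLongOperation2());

// Both operations run concurrently; we block only for the combined result.
String s3 = f1.thenCombine(f2, (s1, s2) -> concatenate(s1, s2)).join();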

Hot Deployment

In the old days, installing Windows updates meant restarting the computer. Many times. Even after installing a new version of the media player. Windows XP improved matters significantly, but the situation is still far from ideal (today I ran Windows Update at work, and now an annoying reminder won't leave me alone until I reboot). In Unix systems the update model has been better: to install updates, I had to stop some components, but not the whole OS. Although that looks better, it is still not acceptable for a large class of server applications. Telecommunications systems must be on 100% of the time: if an update prevents someone from calling an ambulance, lives can be lost. Wall Street firms likewise don't want to shut down their servers over the weekend to install updates.

Ideally, one would update all the necessary sections of code without stopping the system at all. In the imperative world this is impossible [transl. in Smalltalk it is very possible]. Imagine unloading a Java class on the fly and loading a new version in its place. If we did this, all instances of the class would stop working, because the state they stored would be lost. We would have to write tricky version-control code: serialize all the existing instances of the class, destroy them, create instances of the new class, and try to load the serialized data into them in the hope that the migration goes smoothly and the new instances are valid. And the migration code has to be written by hand every time, and it has to preserve the links between objects. In theory it all works; in practice it never does.

In a functional program, all state is stored on the stack in the form of function arguments. This makes hot deployment much easier! Essentially, all you need to do is compute the difference between the code on the production server and the new version and install the changes in the code; the rest is done automatically by the language tools. If you think this is science fiction, think again: Erlang engineers have been updating running systems for years without stopping them.

Machine Assisted Proofs and Optimizations

Another interesting property of functional programming languages is that they can be studied from a mathematical point of view. Since a functional language is the implementation of a formal system, all the mathematical operations that work on paper can be applied to functional programs. The compiler, for example, can convert a piece of code into an equivalent but more efficient piece while mathematically justifying the equivalence. Relational databases have been undergoing such optimizations for years. There is nothing stopping similar techniques from being applied to regular programs.

Additionally, you can use mathematics to prove the correctness of sections of your programs. If you want, you can write tools that analyze code and automatically generate unit tests for edge cases! This functionality is invaluable for rock-solid systems. When developing pacemaker monitoring or air traffic management systems, such tools are essential. And even if your work is not in the field of mission-critical applications, automatic verification tools will still give you a gigantic advantage over your competitors.

Higher Order Functions

Remember, when I talked about the benefits of FP, I noted that "everything looks nice, but it's useless if I have to write in a clumsy language where everything is final." That was a misconception. Using final everywhere looks clumsy only in imperative languages such as Java. Functional programming languages operate with other kinds of abstractions, ones that make you forget you ever liked changing variables. One such tool is higher-order functions.

In FP, a function is not the same as a function in Java or C; it is a superset: such functions can do everything Java functions can, and more. Let's say we have a function in C:

int add(int i, int j) { return i + j; }
In FP this is not the same as a regular C function. Let's extend our Java compiler to support this notation. The compiler must turn the function declaration into the following Java code (remember that there is an implicit final everywhere):

class add_function_t {
    int add(int i, int j) {
        return i + j;
    }
}

add_function_t add = new add_function_t();
The add symbol is not really a function; it is a small class with one method. We can now pass add as an argument to other functions, assign it to another symbol, create instances of add_function_t at runtime, and let them be garbage-collected when they are no longer needed. Functions become basic objects, just like numbers and strings. Functions that operate on functions (take them as arguments) are called higher-order functions. Don't let this scare you: the concept of higher-order functions is almost the same as Java classes operating on each other (we can pass classes to other classes). We could call those "higher-order classes," but nobody bothers, because Java doesn't have a rigorous academic community behind it.
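Incidentally, Java has since acquired exactly this mechanism: a lambda expression is compiled to an instance of a functional interface, that is, to a small object with one method. A minimal sketch:

import java.util.function.IntBinaryOperator;

IntBinaryOperator add = (i, j) -> i + j; // a function stored in a symbol
int sum = add.applyAsInt(2, 3);          // sum == 5
// add can now be passed to other functions or returned from them.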

How and when should you use higher-order functions? I'm glad you asked. You write your program as one big monolithic piece of code without worrying about a class hierarchy. If you see that some piece of code is repeated in different places, you move it into a separate function (fortunately, schools still teach how to do this). If you notice that part of the logic in your function should behave differently in different situations, you create a higher-order function. Confused? Here is a real example from my work.

Suppose we have a piece of Java code that receives a message, transforms it in various ways, and passes it to another server.

void handleMessage(Message msg) {
    // ...
    msg.setClientCode("ABCD_123");
    // ...
    sendMessage(msg);
}
Now imagine that the system has changed, and you need to distribute messages between two servers instead of one. Everything remains the same except the client code: the second server wants to receive this code in a different format. How can we deal with this? We can check where the message is headed and set the correct client code accordingly. For example, like this:

class MessageHandler {
    void handleMessage(Message msg) {
        // ...
        if (msg.getDestination().equals("server1")) {
            msg.setClientCode("ABCD_123");
        } else {
            msg.setClientCode("123_ABC");
        }
        // ...
        sendMessage(msg);
    }
    // ...
}
But this approach does not scale well. As new servers are added, the function will grow linearly, and making changes will become a nightmare. The object-oriented approach is to extract a common superclass MessageHandler and move the client-code logic into subclasses:

abstract class MessageHandler {
    void handleMessage(Message msg) {
        // ...
        msg.setClientCode(getClientCode());
        // ...
        sendMessage(msg);
    }

    abstract String getClientCode();
    // ...
}

class MessageHandlerOne extends MessageHandler {
    String getClientCode() {
        return "ABCD_123";
    }
}

class MessageHandlerTwo extends MessageHandler {
    String getClientCode() {
        return "123_ABCD";
    }
}
Now we can create an instance of the appropriate class for each server, and adding new servers becomes more convenient. But that is a lot of text for such a small change: I had to create two new types just to support a different client code! Now let's do the same in our language with support for higher-order functions:

class MessageHandler {
    void handleMessage(Message msg, Function getClientCode) {
        // ...
        Message msg1 = msg.setClientCode(getClientCode());
        // ...
        sendMessage(msg1);
    }
    // ...
}

String getClientCodeOne() {
    return "ABCD_123";
}

String getClientCodeTwo() {
    return "123_ABCD";
}

MessageHandler handler = new MessageHandler();
handler.handleMessage(someMsg, getClientCodeOne);
We did not create new types or complicate the class hierarchy; we simply passed a function as a parameter. We achieved the same effect as in the object-oriented counterpart, but with some advantages: we are not tied to any class hierarchy, we can pass any other function at runtime and change it at any time, and we keep a high level of modularity with less code. Essentially, the compiler created the object-oriented glue for us! And all the other advantages of FP are preserved. Of course, the abstractions offered by functional languages do not end there: higher-order functions are just the beginning.
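In real Java (8 and later) the same trick can be written with java.util.function.Supplier. This is only a sketch, assuming the Message type and sendMessage exist as in the examples above:

import java.util.function.Supplier;

class MessageHandler {
    void handleMessage(Message msg, Supplier<String> clientCode) {
        // ...
        msg.setClientCode(clientCode.get());
        // ...
        sendMessage(msg);
    }
}

// Usage: no subclasses, just different functions.
MessageHandler handler = new MessageHandler();
handler.handleMessage(someMsg, () -> "ABCD_123");
handler.handleMessage(someMsg, () -> "123_ABCD");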

Currying

Most people I meet have read Design Patterns by the Gang of Four. Any self-respecting programmer will tell you that the book is not tied to any specific programming language and that the patterns apply to software development in general. This is a noble claim. Unfortunately, it is far from the truth.

Functional languages are incredibly expressive. In a functional language you hardly need design patterns, because the language is so high-level that you can easily program in concepts that eliminate all the known patterns. One such pattern is Adapter (how is it different from Facade? It looks like someone needed to stamp more pages to fulfill the terms of a contract). This pattern turns out to be unnecessary if the language supports currying.

The Adapter pattern is most often applied to Java's "standard" unit of abstraction: the class. In functional languages the pattern is applied to functions. The pattern takes an interface and transforms it into another interface according to certain requirements. Here is an example of the Adapter pattern:

int pow(int i, int j);

int square(int i) {
    return pow(i, 2);
}
This code adapts the interface of a function that raises a number to an arbitrary power into the interface of a function that squares a number. In academic circles this simple technique is called currying (after the logician Haskell Curry, who performed a series of mathematical tricks to formalize it all). Since in FP functions are passed around everywhere as arguments, currying is used very often to bring functions to the interface needed in one place or another. And since the interface of a function is its arguments, currying is used to reduce the number of arguments (as in the example above).

This tool is built into functional languages. You don't have to manually create a function that wraps the original. A functional language will do everything for you. As usual, let's expand our language by adding currying.

square = int pow(int i, 2);
With this line we automatically create a squaring function with one argument. The new function will call the pow function, substituting 2 as the second argument. From a Java perspective, it would look like this:

class square_function_t {
    int square(int i) {
        return pow(i, 2);
    }
}

square_function_t square = new square_function_t();
As you can see, we simply wrote a wrapper over the original function. In FP, currying is just a simple and convenient way to create such wrappers: you focus on the task, and the compiler writes the necessary code for you! It's very simple, and it happens every time you want to use the Adapter (wrapper) pattern.
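In actual Java the same wrapper can be built explicitly from nested function objects. A sketch (using Math.pow in place of the assumed pow):

import java.util.function.Function;
import java.util.function.IntUnaryOperator;

// A curried power function: given the exponent, it returns a new function.
Function<Integer, IntUnaryOperator> powCurried =
        exponent -> (base -> (int) Math.pow(base, exponent));

IntUnaryOperator square = powCurried.apply(2);
int nine = square.applyAsInt(3); // 9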

Lazy evaluation

Lazy (or deferred) evaluation is an interesting technique that becomes possible once you have embraced the functional philosophy. We have already seen the following piece of code when talking about multithreading:

String s1 = somewhatLongOperation1();
String s2 = somewhatLongOperation2();
String s3 = concatenate(s1, s2);
In imperative programming languages, the order of evaluation raises no questions. Since each function can affect or depend on external state, a strict order of calls must be maintained: first somewhatLongOperation1, then somewhatLongOperation2, and concatenate at the end. But in functional languages things are not so simple.

As we saw earlier, somewhatLongOperation1 and somewhatLongOperation2 can be run simultaneously, because the functions are guaranteed not to affect or depend on global state. But what if we don't want to run them simultaneously: must we call them sequentially? The answer is no. These computations need to run only when some other function depends on s1 and s2. We don't even have to execute them until we need them inside concatenate: if instead of concatenate we substitute a function that, depending on a condition, uses only one of its two arguments, the second argument may never be evaluated at all! Haskell is an example of a lazy language: it does not guarantee any order of calls (at all!), because it executes code only as it is needed.
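In Java this deferral can be imitated with Supplier: nothing runs until someone actually asks for a value. A sketch, with the long operations assumed as before:

import java.util.function.Supplier;

Supplier<String> s1 = () -> somewhatLongOperation1(); // not executed yet
Supplier<String> s2 = () -> somewhatLongOperation2(); // not executed yet

// The operations run only when concatenate finally asks for the values:
String s3 = concatenate(s1.get(), s2.get());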

Lazy computing has a number of advantages as well as some disadvantages. In the next section we will discuss the advantages and I will explain how to live with the disadvantages.

Optimization

Lazy evaluation provides enormous potential for optimizations. A lazy compiler looks at code the way a mathematician looks at an algebraic expression: it can cancel terms out, skip executing certain sections of code entirely, and reorder calls for greater efficiency, even arranging code so as to reduce the number of errors, all while guaranteeing the integrity of the program. This is the big advantage of describing a program with strict formal primitives: the code obeys mathematical laws and can be studied with mathematical methods.

Abstracting Control Structures

Lazy computing provides such a high level of abstraction that amazing things become possible. For example, imagine implementing the following control structure:

unless(stock.isEuropean()) {
    sendToSEC(stock);
}
We want the sendToSEC function to be executed only if the stock is not European. How can you implement unless? Without lazy evaluation we would need a macro system, but in languages like Haskell that is unnecessary: we can declare unless as a function!

void unless(boolean condition, List code) {
    if (!condition) code;
}
Note that code is not executed if condition == true. In strict languages this behavior cannot be reproduced, because the arguments would be evaluated before unless is called.
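In a strict language the same effect can be simulated by wrapping the block in a function by hand, so that its evaluation is delayed until unless decides to run it. A Java sketch (stock and sendToSEC are assumed as in the example):

// The block is wrapped in a Runnable, so it is evaluated only on demand.
static void unless(boolean condition, Runnable code) {
    if (!condition) {
        code.run();
    }
}

// Usage:
unless(stock.isEuropean(), () -> sendToSEC(stock));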

Infinite Data Structures

Lazy languages allow you to define infinite data structures, which are much harder to create in strict languages [transl. just not in Python]. For example, consider the Fibonacci sequence. Obviously, we cannot compute an infinite list in finite time, let alone store it in memory. In strict languages like Java, we would simply write a function that returns an arbitrary member of the sequence. In languages like Haskell, we can abstract away from that and simply declare an infinite list of Fibonacci numbers. Since the language is lazy, only the parts of the list that are actually used by the program are ever computed. This lets you abstract away from a large number of problems and look at them from a higher level (for example, you can use list-processing functions on infinite sequences).
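Java's Stream API is lazy enough to approximate this. The sketch below declares a conceptually unbounded Fibonacci sequence; only the elements actually consumed are ever computed:

import java.util.stream.Stream;

// Each element carries a pair (current, next); the stream itself is unbounded.
Stream<long[]> fib = Stream.iterate(new long[]{1, 1},
        p -> new long[]{p[1], p[0] + p[1]});

// Only the first ten numbers are computed: 1 1 2 3 5 8 13 21 34 55
fib.limit(10).mapToLong(p -> p[0]).forEach(System.out::println);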

Flaws

Of course, free cheese only comes in a mousetrap. Lazy evaluation comes with a number of disadvantages, and they stem from the laziness itself. In reality, a definite order of evaluation is very often needed. Take, for example, the following code:

System.out.println("Please enter your name: ");
System.in.readLine();
In a lazy language, nobody guarantees that the first line will be executed before the second! This means we can't do I/O, can't usefully call native functions (they need to be called in a definite order, to account for their side effects), and can't interact with the outside world! If we introduced a mechanism for ordering the execution of code, we would lose the advantage of the code's mathematical rigor (and with it all the benefits of functional programming). Fortunately, all is not lost. Mathematicians got to work and came up with several techniques that ensure instructions execute in the right order without losing the functional spirit: the best of both worlds! Such techniques include continuations, monads, and uniqueness typing. In this article we will deal with continuations, leaving monads and uniqueness typing for next time. Interestingly, continuations are a very useful thing in their own right, used for much more than enforcing a strict order of evaluation. We'll talk about that too.

Continuations

Continuations play the same role in programming as The Da Vinci Code played in human history: the astonishing revelation of humanity's greatest mystery. Well, maybe not exactly, but they certainly tear away the veils, much like learning to take the square root of -1 once did.

When we looked at functions, we learned only half the truth, because we assumed that a function returns its value to the function that called it. In this sense, a continuation is a generalization of a function: a function does not have to return control to the place from which it was called, but can return to any place in the program. A continuation is a parameter we can pass to a function to indicate its return point. That sounds much scarier than it really is. Let's look at the following code:

int i = add(5, 10);
int j = square(i);
The add function returns the number 15, which is assigned to i at the place where the function was called. The value of i is then used when calling square. Note that a lazy compiler cannot reorder these computations, because the second line depends on the result of the first. We can rewrite this code in Continuation Passing Style (CPS), where add returns its value to the square function:

int j = add(5, 10, square);
In this case, add receives an additional argument: a function to be called after add finishes its work. In both examples j will equal 225.
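Here is a hedged Java rendering of the same idea, with IntUnaryOperator standing in for the continuation type:

import java.util.function.IntUnaryOperator;

// add does not return to its caller in the usual sense: it hands
// its result to the continuation it was given.
static int add(int i, int j, IntUnaryOperator continuation) {
    return continuation.applyAsInt(i + j);
}

static int square(int i) {
    return i * i;
}

int j = add(5, 10, x -> square(x)); // j == 225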

This is the first technique that allows you to specify the order in which two expressions are executed. Let's return to our I/O example.

System.out.println("Please enter your name: "); System.in.readLine();
These two lines are independent of each other, and the compiler is free to reorder them as it wishes. But if we rewrite the code in CPS, we thereby add the necessary dependency, and the compiler will have to perform the computations one after another!

System.out.println("Please enter your name: ", System.in.readLine);
In this case, println would have to call readLine, passing it its own result, and return the result of readLine at the end. In this form we can be sure that the functions will be called in turn and that readLine will be called at all (after all, the compiler expects to receive the result of the last operation). In the case of Java, println returns void, but if it returned some abstract value (which could serve as an argument to readLine), that would solve our problem! Of course, building such chains of functions greatly impairs the readability of the code, but this can be dealt with: we can add syntactic features to our language that let us write expressions as usual, while the compiler chains the computations automatically. We can then perform computations in any order without losing the advantages of FP (including the ability to study the program with mathematical methods)! If this is confusing, remember that functions are just instances of a class with a single method. Rewrite our example so that println and readLine are instances of such classes, and it will become clearer.

But the benefits of continuations don't end there. We can write an entire program in CPS, so that every function is called with an additional parameter, a continuation, to which it passes its result. In principle, any program can be translated into CPS if each function is treated as a special case of a continuation. This conversion can be done automatically (in fact, many compilers do exactly that).

As soon as we convert a program into CPS form, it becomes clear that every instruction has a continuation: the function to which its result will be passed, which in an ordinary program would be its call site. Take any instruction from the last example, say add(5, 10). In a program written in CPS form, it is clear what the continuation is: it is the function that add will call when it finishes. But what is the continuation in a non-CPS program? We could, of course, convert the program to CPS, but do we have to?

It turns out we don't. Look closely at our CPS conversion. If you start writing a compiler for it, you will find that the CPS version needs no stack! Functions never return anything in the traditional sense of the word "return"; they simply call another function, passing along the result of their computation. There is no need to push arguments onto the stack before each call and pop them off afterwards: we can simply store the arguments in some fixed memory location and use a jump instead of a regular call. We don't need to keep the original arguments, because they will never be needed again: functions don't return!

Thus, CPS-style programs need no stack but carry an extra argument, a function to be called. Non-CPS programs have no extra argument but use a stack. What is stored on the stack? Just the arguments and a pointer to the memory location the function should return to. Have you guessed yet? The stack stores information about continuations! The return-address pointer on the stack is the same thing as the function to be called in a CPS program! To find out what the continuation of add(5, 10) is, just take the return address from the stack.

That wasn't hard. A continuation and a return-address pointer really are the same thing; the continuation is merely specified explicitly, so it may differ from the place where the function was called. If you remember that a continuation is a function, and that a function in our language is compiled into an instance of a class, you will see that a return-address pointer on the stack and a pointer to a continuation are actually the same thing, since our function (as an instance of a class) is just a pointer. This means that at any moment in your program you can ask for the current continuation (essentially, the information on the stack).

Okay, so now we understand what the current continuation is. What does it mean? If we take the current continuation and save it somewhere, we save the current state of the program: we freeze it. This is similar to an OS hibernation mode. A continuation object stores the information needed to resume program execution from the point at which the continuation was requested. The operating system does this to your programs all the time when it switches contexts between threads; the only difference is that there everything stays under the control of the OS. If you request a continuation object (in Scheme this is done by calling the call-with-current-continuation function), you receive an object containing the current continuation, the stack (or, in the case of CPS, the next function to call). You can save this object to a variable (or even to disk). If you then decide to "restart" the program with this continuation, the state of your program is "transformed" back to the state at the moment the continuation object was taken. It is like switching back to a suspended thread, or like the OS waking from hibernation, except that you can do it many times over. After the OS wakes up, the hibernation information is destroyed; if it were not, you could restore the OS state from the same point again and again. It's almost like traveling through time. With continuations, you can afford it!

In what situations are continuations useful? Usually when you are trying to emulate state in systems that are inherently stateless. An excellent use for continuations has been found in Web applications (for example, in the Seaside framework for the Smalltalk language). Microsoft's ASP.NET goes to great lengths to save state between requests to make your life easier. If C# supported continuations, the complexity of ASP.NET could be cut in half: you would simply save a continuation and restore it on the next request. From the Web programmer's point of view there would not be a single break: the program would simply continue from the next line! Continuations are an incredibly useful abstraction for solving some problems, and with more and more traditional fat clients moving to the Web, their importance will only grow.

Pattern matching

Pattern matching is not such a new or innovative idea. In fact, it has little to do with functional programming. The only reason it is often associated with FP is that functional languages have had pattern matching for some time, while imperative languages have not.

Let's start our introduction to Pattern matching with the following example. Here is a function for calculating Fibonacci numbers in Java:

int fib(int n) {
    if (n == 0) return 1;
    if (n == 1) return 1;
    return fib(n - 2) + fib(n - 1);
}
And here is an example in a Java-like language with support for Pattern matching

int fib(0) { return 1; }
int fib(1) { return 1; }
int fib(int n) { return fib(n - 2) + fib(n - 1); }
What is the difference? The compiler implements the branching for us.

Big deal, you might think. And indeed, by itself it is not that important. It was noticed that a great many functions contain complex switch constructs (this is partly true of functional programs), and so it was decided to single this case out. The function definition is split into several variants, and patterns are set up in place of the function's arguments (this is reminiscent of method overloading). When the function is called, the compiler compares the arguments against all the definitions on the fly and picks the most suitable one, usually the most specialized definition. For example, int fib(int n) can be called when n is 1, but it won't be, because int fib(1) is a more specialized definition.
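Java has since moved in this direction: switch became an expression in Java 14, and pattern matching over types was added in Java 21. A sketch of fib using a switch expression:

static int fib(int n) {
    return switch (n) {
        case 0, 1 -> 1;                     // the specialized cases
        default -> fib(n - 2) + fib(n - 1); // the general case
    };
}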

Pattern matching is usually more complicated than in our example. For instance, an advanced pattern matching system allows us to write the following code:

Int f(int n< 10) { ... } int f(int n) { ... }
When is pattern matching useful? The list of cases is surprisingly long! Whenever you have complex nested if constructs, pattern matching can do the job better with less code. A good example that comes to mind is the WndProc function implemented in every Win32 program (even when it is hidden from the programmer behind a high fence of abstractions). A typical pattern matching system can even check the contents of collections: for example, if you pass an array to a function, you can select all arrays whose first element is 1 and whose third element is greater than 3.

Another advantage of pattern matching is that when you make a change, you don't have to dig through one huge function; all you need to do is add (or change) individual function definitions. Thus we get rid of a whole layer of patterns from the famous Gang of Four book. The more complex and branched the conditions, the more useful pattern matching becomes. Once you start using it, you will wonder how you ever managed without it.

Closures

So far we have discussed FP features in the context of "purely" functional languages: languages that are implementations of the lambda calculus and contain no features that contradict Church's formal system. However, many features of functional languages are used outside the lambda calculus. Although an implementation of an axiomatic system is interesting for reasoning about programs as mathematical expressions, it is not always applicable in practice. Many languages prefer to use elements of functional languages without adhering to strict functional doctrine. Some such languages (for example, Common Lisp) do not require variables to be final: their values can change. Nor do they require functions to depend only on their arguments: functions are allowed to access state outside their scope. But at the same time they include features such as higher-order functions. Passing functions around in an impure language is a little different from the equivalent operation within the lambda calculus and requires an interesting feature called lexical closure. Look at the following example; remember that in this case variables are not final and a function may access variables outside its scope:

Function makePowerFn(int power) {
    int powerFn(int base) {
        return pow(base, power);
    }
    return powerFn;
}

Function square = makePowerFn(2);
square(3); // returns 9
The makePowerFn function returns a function that takes one argument and raises it to a certain power. What happens when we try to evaluate square(3)? The variable power is nowhere in the scope of powerFn, because makePowerFn has already returned and its stack frame is gone. How, then, does square work? The language must somehow store the value of power so that square can still use it. What if we create another function, cube, that raises a number to the third power? Then the runtime must store two values of power, one for each function created by makePowerFn. The phenomenon of storing these values is called a closure. A closure does not only preserve the arguments of the enclosing function. For example, a closure might look like this:

Function makeIncrementer() {
    int n = 0;
    int increment() {
        return ++n;
    }
    return increment;
}

Function inc1 = makeIncrementer();
Function inc2 = makeIncrementer();
inc1(); // returns 1
inc1(); // returns 2
inc1(); // returns 3
inc2(); // returns 1
inc2(); // returns 2
inc2(); // returns 3
During execution the values of n are kept alive, and the counters have access to them. Moreover, each counter has its own copy of n, even though by rights these values should have disappeared once makeIncrementer returned. How does the compiler manage to compile this? What happens behind the scenes of closures? Luckily, we have a backstage pass.

Everything is done quite logically. The first thing to notice is that local variables here no longer obey simple scope rules and have an indefinite lifetime. The obvious conclusion is that they are no longer stored on the stack: they must be kept on the heap. A closure, then, is implemented like the ordinary function we discussed earlier, except that it carries an additional reference to the surrounding variables:

class some_function_t {
    SymbolTable parentScope;

    // ...
}
If a closure refers to a variable that is not in the local scope, it looks it up in the parent scope. That's all! Closures connect the functional world with the OOP world. Every time you create a class that stores some state and pass it somewhere, remember closures. A closure is just an object that creates "attributes" on the fly, capturing them from the enclosing scope so that you don't have to do it yourself.
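Java lambdas capture surrounding variables in exactly this spirit, though captured variables must be effectively final; a one-element array is a common workaround. A sketch of makeIncrementer:

import java.util.function.IntSupplier;

static IntSupplier makeIncrementer() {
    int[] n = {0}; // a heap-allocated cell captured by the closure
    return () -> ++n[0];
}

IntSupplier inc1 = makeIncrementer();
IntSupplier inc2 = makeIncrementer();
inc1.getAsInt(); // 1
inc1.getAsInt(); // 2
inc2.getAsInt(); // 1: each closure keeps its own n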

Now what?

This article only touches the tip of the functional programming iceberg. You can dig deeper and find something really big, and in our case, something good. In the future I plan to write about category theory, monads, functional data structures, type systems in functional languages, functional multithreading, functional databases, and much more. If I manage to write (and learn in the process) about even half of these topics, my life will not have been in vain. In the meantime, Google is your faithful friend.

I've always wanted to write a series of articles about functional programming for this magazine, and I'm very glad I finally have the opportunity, even though my series on data analysis is still far from finished :). I won't announce the contents of the whole series; I'll only say that today we will talk about the different programming languages that support the functional style and the corresponding programming techniques.

Programming languages ​​that not everyone knows about

I started programming as a child, and by the age of twenty-five it seemed to me that I knew and understood everything. Object-oriented programming had become part of my brain, and every imaginable book on industrial programming had been read. Yet I still had the feeling that I was missing something, something very subtle and incredibly important. The fact is that, like many in the nineties, I was taught at school to program in Pascal (oh yes, glory to Turbo Pascal 5.5! - Ed.), then C and C++; at university, Fortran; and then Java as the main tool at work. I knew Python and several other languages, but none of it was quite the thing, and I had no serious education in the field of Computer Science. One day, during a flight across the Atlantic, I couldn't sleep and wanted something to read. Somehow, magically, a book about the Haskell programming language turned out to be at my fingertips. It seems to me that it was then that I understood the true meaning of the expression "beauty requires sacrifice."

Now, when people ask me how I learned Haskell, I answer: on an airplane. That episode changed my attitude to programming in general. Of course, after this first acquaintance many things seemed not entirely clear to me; I had to push myself and study the subject more carefully. And you know, ten years have passed, and many functional elements have become part of industrial languages: lambda functions now exist even in Java, type inference in C++, pattern matching in Scala. Many people consider this a breakthrough. In this series of articles I will talk about functional programming techniques using various languages and their features.

Internet users love compiling all sorts of lists and rankings for the amusement of the public, for example, "books to read before you turn thirty." If I were given the task of compiling a list of programming books to read before you reach any age at all, first place would certainly go to Abelson and Sussman's "Structure and Interpretation of Computer Programs". Sometimes it even seems to me that the compiler or interpreter of any language ought to refuse to work for anyone who has not read this book.

So if there is one language with which to start learning functional programming, it is Lisp. Strictly speaking, this is a whole family of languages, which includes the now quite popular JVM language Clojure. But Clojure is not especially suitable as a first functional language. For that, it is better to use Scheme, which was developed at MIT and served until the mid-2000s as the main language for teaching programming there. Although the introductory course bearing the same title as the book mentioned above has since been replaced by a course based on Python, the language has still not lost its relevance.

I will try to talk briefly about the Scheme language and, in general, about the idea behind this group of languages. Although Lisp is very old (of all high-level languages, only Fortran is older), many programming methods in use today first appeared in it. In what follows I will use the name Lisp while meaning a specific implementation: Scheme.

Syntax in two minutes

Lisp syntax is, hmm, a little controversial. The idea underlying it is extremely simple: it is built on so-called S-expressions, a prefix notation in which the familiar expression 2 + 3 is written as (+ 2 3). This may seem strange, but in practice it provides some additional possibilities; by the way, (+ 2 10 (* 3.14 2)) works too :). Thus, an entire program is a collection of lists written in prefix notation. In Lisp, the program itself and its abstract syntax tree, "if you know what I mean" 😉, are essentially the same thing. Such notation makes parsing Lisp programs very simple.
Since we are talking about a programming language, we should talk about how to define functions in this language.

Here we need to make a small digression. There is one subtlety whose significance is underestimated in modern literature. One should still distinguish a function in the mathematical sense from a function as we understand it in functional programming. The fact is that in mathematics functions are declarative objects, while in programming they are used to organize the computation process; in a sense, they represent imperative knowledge, knowledge that answers the question “how?”. That is why Abelson and Sussman in their book very carefully draw this distinction and call functions in programming procedures. This is not customary in modern functional programming literature, but I still strongly recommend keeping these two meanings of the word “function” separate, at least in your head.

The easiest way to define a function is to write the following code. Let's start with something indecently simple:

(define (sq-roots a b c)
  (let ((D (- (* b b) (* 4 a c))))
    (if (< D 0)
        (list)
        (let ((sqrtD (sqrt D)))
          (let ((x1 (/ (- (- b) sqrtD) (* 2.0 a)))
                (x2 (/ (+ (- b) sqrtD) (* 2.0 a))))
            (list x1 x2))))))

Yes, it's exactly what you thought: solving a quadratic equation in Scheme. But this is more than enough to see all the features of the syntax. Here sq-roots is the name of a function of three formal parameters.

At first glance, the let construct, which is used to define local variables, has too many parentheses. But the impression is misleading: we simply first define a list of bindings and then an expression in which these variables are used. Here (list) is the empty list, which we return when there are no roots, and (list x1 x2) is a list of two values.
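A quick check in the REPL (the exact printed form of the numbers may differ between Scheme implementations):

(sq-roots 1 -3 2) ; => (1. 2.)
(sq-roots 1 0 1)  ; => ()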

Now about expressions. In our sq-roots function we used an if construct. This is where functional programming comes in.

The point is that, unlike imperative languages such as C, in functional languages if is an expression, not a statement. In practice this means that it cannot omit the else branch: an expression must always have a value.
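For example, absolute value is naturally written with if as an expression whose value is returned directly (my-abs is an arbitrary name, chosen so as not to shadow the built-in abs):

(define (my-abs x)
  (if (< x 0) (- x) x))

(my-abs -5) ; => 5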

You can't talk about syntax without talking about syntactic sugar. In programming languages, syntactic sugar refers to constructs that are not strictly necessary but make code easier to read and reuse. To begin with, a classic example from the C language. Many people know that arrays are not a necessary means of expression, since there are pointers. Indeed, arrays are implemented through pointers, and a[i] in C is the same as *(a + i). This example is rather unusual and has a funny effect associated with it: since addition remains commutative for pointers, the last expression is the same as *(i + a), which can be obtained by removing the syntactic sugar from the expression i[a]! The operation of removing syntactic sugar is called desugaring.

Returning to the Scheme language, we should give an important example of syntactic sugar. To define variables, as in the case of functions, the keyword define is used (in Lisp and Scheme it is called a special form). For example, (define pi 3.14159) defines the variable pi. Generally speaking, you can define functions in exactly the same way:

(define square (lambda (x) (* x x)))

it's the same as

(define (square x) (* x x))

The last line is a little easier to read than the version with a lambda expression. However, it is clear that the first option is sufficient and the second is optional. Why is the first one more important? Because one of the most fundamental properties of functional languages is that functions in them are first-class objects, which means that functions can be passed as arguments and returned as values.
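A minimal illustration of first-class functions: a function that takes another function and applies it twice (the name twice is my own, purely for illustration):

(define (twice f x) (f (f x)))

(twice square 3) ; => 81, i.e. (square (square 3))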

If you look at let from the point of view of a lambda expression, you can easily notice the following correspondence:

(let ((x 5) (y 2))
  (* x y))

(apply (lambda (x y) (* x y)) (list 5 2))

Functional programming

Functional languages come in two kinds: pure and impure. Pure functional languages are relatively rare; the main examples are Haskell and Clean. Pure languages have no side effects. In practice this means no assignment and no I/O in the way we are used to. This creates a number of difficulties, although the languages just mentioned solve them quite cleverly, and a lot of I/O-heavy code is written in them. Languages like Lisp, OCaml or Scala allow functions with side effects, and in this sense they are often more practical.

Our task is to learn the basic techniques of functional programming in Scheme. Therefore, we will write purely functional code, without using a random-number generator, I/O, or set!, which allows changing the values of variables. You can read about all this in the book SICP. For now, let's focus on what matters most to us.

The first thing that confuses a beginner about functional programming is the lack of loops. But what should we do? Many of us were taught that recursion is bad, the usual argument being that recursion in conventional programming languages is implemented inefficiently. The point is that, in general, one should distinguish recursion as a technique, that is, a function calling itself, from recursion as a process. Functional languages support tail-recursion optimization or, as it is sometimes called, accumulator recursion. This can be illustrated with a simple example.

Let us have two functions - succ and prev. The first returns a number 1 greater than the argument, and the second returns 1 less. Now let’s try to define the addition operation in two ways:

(define (add x y)
  (if (eq? y 0)
      x
      (add (succ x) (prev y))))

(define (add-1 x y)
  (if (eq? y 0)
      x
      (succ (add-1 x (prev y)))))

What is the difference between the first and second case? The fact is that if we consider the calculation method for the first case step by step, we can see the following:

(add 3 4) => (add 4 3) => (add 5 2) => (add 6 1) => (add 7 0) => 7

In the second case we will have something like this:

(add-1 3 4) => (succ (add-1 3 3)) => (succ (succ (add-1 3 2))) => (succ (succ (succ (add-1 3 1)))) => (succ (succ (succ (succ (add-1 3 0))))) => (succ (succ (succ (succ 3)))) => (succ (succ (succ 4))) => (succ (succ 5)) => (succ 6) => 7

Although the result is the same in both cases, the computation processes are radically different. In the first case, the amount of memory used does not change; in the second, it grows linearly. The first process is iterative, the second recursive. So, to write efficient programs in functional languages, you need to use tail recursion to avoid stack overflow.
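To see the same pattern in a more typical example, here is a sketch of factorial in both styles; only the second, which carries the intermediate product in the accumulator acc, is tail-recursive:

(define (fact n)
  (if (= n 0) 1 (* n (fact (- n 1)))))

(define (fact-iter n acc)
  (if (= n 0) acc (fact-iter (- n 1) (* n acc))))

(fact-iter 5 1) ; => 120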

Lists

One of the most important elements of functional programming, along with recursion, is lists. They provide the basis for complex data structures. As in other functional languages, lists are built from head-tail pairs. The cons function is used to construct a list, and the car and cdr functions access the head and tail of a list, respectively. So the list (list 1 2 3) is nothing more than (cons 1 (cons 2 (cons 3 '()))). Here '() is the empty list. A typical list-processing function looks like this:

(define (sum lst)
  (if (null? lst)
      0
      (+ (car lst) (sum (cdr lst)))))

This function simply sums the elements of a list. Many list-processing functions look exactly like this, and in one of the following articles I will tell you why. For now, I'll just note that if we replace the first argument of the addition with 1, we get a function that computes the length of the list.
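For comparison, that length variant differs from sum only in what is added at each step (note that it shadows the built-in length):

(define (length lst)
  (if (null? lst)
      0
      (+ 1 (length (cdr lst)))))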

Higher order functions

Since functions can be passed as arguments and returned as a value, it would be nice to find a use for this. Consider the following classic example:

(define (map f lst)
  (if (null? lst)
      lst
      (cons (f (car lst)) (map f (cdr lst)))))

The map function applies the function f to each element of the list. Strange as it may seem, we can now express the length function in terms of sum and map:

(define (length lst) (sum (map (lambda (x) 1) lst)))
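A quick check (recall that our definitions shadow the built-in map and length):

(map (lambda (x) (* x x)) (list 1 2 3)) ; => (1 4 9)
(length (list 7 8 9))                   ; => 3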

If you suddenly decided now that all this is somehow too simple, then let’s think about this: how to implement lists using higher-order functions?

That is, you need to implement the functions cons, car and cdr so that they satisfy the following relation: for any list lst, the value of (cons (car lst) (cdr lst)) coincides with lst. This can be done as follows:

(define (cons x xs)
  (lambda (pick)
    (if (eq? pick 1) x xs)))

(define (car f) (f 1))
(define (cdr f) (f 2))

How does it work? Here the cons function returns another function of one parameter that, depending on its value, returns either the first or the second argument. It is easy to check that the required relation holds for these functions.
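A quick check with these definitions in effect:

(define p (cons 1 2))
(car p) ; => 1
(cdr p) ; => 2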

Using quote and metaprogramming

One nice feature of the Lisp language is that it is extremely convenient for writing programs that convert other programs. The point is that a program consists of lists, and a list is the main data structure in the language. There is a way to simply “quote” the text of a program so that it is perceived as a list of atoms.

Atoms are simply symbolic expressions; for example, (list 'hello 'world) is the same as '(hello world), or in full form (quote (hello world)). Although most Lisp dialects have strings, sometimes you can get by with quote alone. More importantly, this approach simplifies code generation and program transformation.
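Since a quoted program is just a list, ordinary list operations apply to it directly, which is what makes program transformation so natural:

(car '(+ 2 3)) ; => +
(cdr '(+ 2 3)) ; => (2 3)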

First, let's look at symbolic computation. This usually means computer algebra systems capable of handling symbolic objects: formulas, equations and other complex mathematical objects (there are many such systems; the main examples are Maple and Mathematica).

You can try to implement symbolic differentiation. I think everyone close to finishing school can recall the rules of differentiation (in reality things are a little more subtle: here we compute the partial derivative, simply treating the other variables as constants, but this does not complicate matters at all).

So I'll just give an example of code that would show the essence of the matter, leaving the details to the reader (who, I hope, will carefully study the book "Structure and Interpretation of Computer Programs").

(define (deriv exp var)
  (cond ((number? exp) 0)
        ((variable? exp) (if (same-variable? exp var) 1 0))
        ((sum? exp) (make-sum (deriv (addend exp) var)
                              (deriv (augend exp) var)))
        ((product? exp) (make-sum
                         (make-product (multiplier exp) (deriv (multiplicand exp) var))
                         (make-product (deriv (multiplier exp) var) (multiplicand exp))))
        (else (error "unknown expression type - DERIV" exp))))

Here the deriv function is an implementation of the differentiation algorithm as taught in school. It requires implementations of the predicates number?, variable? and so on, which let us determine the nature of a given element of an expression. We also need the constructor functions make-sum and make-product. Here we use the cond construct, which we have not met before: it is an analogue of the switch statement in languages such as C and Java.

Before we move on to implementing the missing functions, it is worth noting that functional programming quite often uses a top-down development approach, where the most general functions are written first, followed by the small functions responsible for implementation details.

(define (variable? x) (symbol? x))

(define (same-variable? v1 v2)
  (and (variable? v1) (variable? v2) (eq? v1 v2)))

(define (make-sum a1 a2) (list '+ a1 a2))
(define (make-product m1 m2) (list '* m1 m2))

(define (sum? x) (and (pair? x) (eq? (car x) '+)))
(define (addend s) (cadr s))
(define (augend s) (caddr s))

(define (product? x) (and (pair? x) (eq? (car x) '*)))
(define (multiplier p) (cadr p))
(define (multiplicand p) (caddr p))

The implementation of these functions does not require special comments, with the possible exception of the cadr and caddr functions. These are nothing but functions that return the second and third elements of a list respectively.
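If cadr and caddr were not built in, they could be defined like this:

(define (cadr lst) (car (cdr lst)))
(define (caddr lst) (car (cdr (cdr lst))))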

If you use the interactive Scheme interpreter, you can easily verify that the resulting code works correctly, but without simplifying the expressions:

(deriv "(+ x 3) "x) => (+ 1 0) (deriv "(* (* x y) (+ x 3)) "x) => (+ (* (* x y) (+ 1 0 )) (* (+ (* x 0) (* 1 y)) (+ x 3)))

For trivial cases (for example, multiplication by 0), the simplification problem is solved quite easily. This question is left to the reader. Most of the examples in this article are taken from the SICP book, so if you have any difficulties, you can simply refer to the source (the book is in the public domain).
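As a hint of the direction (the full version is in SICP), make-sum can fold numeric constants and drop zero terms; the helper =number? follows the book:

(define (=number? exp num)
  (and (number? exp) (= exp num)))

(define (make-sum a1 a2)
  (cond ((=number? a1 0) a2)
        ((=number? a2 0) a1)
        ((and (number? a1) (number? a2)) (+ a1 a2))
        (else (list '+ a1 a2))))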

Like any Lisp dialect, Scheme has great metaprogramming capabilities, mostly related to the use of macros. Unfortunately, that topic requires a separate article.

Let's write a function that will remove syntactic sugar from the function definition as discussed earlier:

(define (desugar-define def)
  (let ((fn-args (cadr def))
        (body (caddr def)))
    (let ((name (car fn-args))
          (args (cdr fn-args)))
      (list 'define name (list 'lambda args body)))))

This function works great with well-formed function definitions:

(desugar-define "(define (succ x) (+ x 1))) => (define succ (lambda (x) (+ x 1)))

However, this does not work for ordinary definitions like (define x 5).
If we want to remove syntactic sugar from a large program containing many different definitions, we must implement an additional check:

(define (sugared? def)
  (and (eq? (car def) 'define)
       (list? (cadr def))))
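A quick check of the predicate:

(sugared? '(define (succ x) (+ x 1))) ; => #t
(sugared? '(define x 5))              ; => #f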

Such a check can be built directly into desugar-define, so that a definition that needs no desugaring is simply returned unchanged (this trivial exercise is left to the reader). Then you can treat the entire program as a list of definitions and use map:

(map desugar-define prog)

Conclusion

In this article, I did not set myself the task of covering Scheme in any detail. First of all, I wanted to show several interesting features of the language and draw the reader towards studying functional programming. This wonderful language, despite its simplicity, has its own charm and features that make programming in it very exciting. As for tools for working with Scheme, the strong-willed can take on MIT Scheme, while the rest can enjoy the excellent learning environment DrRacket. In one of the following articles I will definitely show how to write your own Scheme interpreter.

Functions are abstractions in which the implementation details of some action are hidden behind a separate name. A well-written set of functions can be used many times. The Python standard library comes with a wealth of pre-built and debugged functions, many of which are versatile enough to handle a wide variety of inputs. Even if a certain section of code is not used several times, but is quite autonomous in terms of its input and output data, it can safely be separated into its own function.

This lecture is oriented more towards practical considerations than towards functional programming theory. However, where necessary, the relevant terms will be used and explained.

Next, we will look in detail at defining and using functions in Python, recursion, passing and returning functions as parameters, sequence processing and iterators, as well as the concept of a generator. It will be demonstrated that in Python functions are objects (and therefore can be passed as parameters and returned as results of other functions). In addition, we will discuss how to implement some functional programming mechanisms that have no direct syntactic support in Python but are widespread in functional programming languages.

What is functional programming?

Functional programming is a programming style that uses only compositions of functions. In other words, it is programming with expressions rather than imperative commands.

As David Mertz points out in his article on functional programming in Python, functional programming is “programming in functional languages (LISP, ML, OCAML, Haskell, ...)”, whose main attributes are:

  • The presence of first-class functions (functions, like other objects, can be passed into and returned from functions).
  • Recursion as the main control structure in a program.
  • A focus on processing lists (sequences).
  • A prohibition of side effects in functions, which primarily means the absence of assignment (in “pure” functional languages).
  • Statements are avoided in favor of expressions; ideally, the entire program is one expression with accompanying definitions.
  • The key question is what needs to be computed, not how.
  • The use of higher-order functions (functions over functions over functions).

Functional program

In mathematics, a function maps objects from one set (the function's domain) to another (the set of its values). Mathematical functions (they are called pure) compute their result from the given arguments “mechanically” and unambiguously. Pure functions do not store any data between two calls. You can think of them as black boxes about which you know only what they do, but not how. Programs in the functional style are designed as compositions of functions. Functions here are understood in almost the same way as in mathematics: they map one object to another. In programming, “pure” functions are an ideal that is not always achievable in practice. Practically useful functions usually have side effects: they save state between calls or change the state of other objects. For example, it is impossible to imagine I/O functions without side effects; in fact, such functions are used precisely for the sake of these “effects”. In addition, mathematical functions easily work with objects that require an infinite amount of information (for example, real numbers). In general, computer

Functional style programs are designed as composition functions. In this case, functions are understood in almost the same way as in mathematics: they map one object to another. In programming, “pure” functions are an ideal that is not always achievable in practice. Practically useful features usually have by-effect: Save state between calls or change the state of other objects. For example, it is impossible to imagine I/O functions without side effects. Actually, such functions are used for the sake of these “effects”. In addition, mathematical functions easily work with objects that require an infinite amount of information (for example, real numbers). In general, computer