The definition of functional dependence: the basic notions. The simplest functional dependencies. Linear pair regression

Functional dependencies. Basic definitions.

In relational databases, datalogical, or logical, design leads to the development of a database schema, that is, a set of relation schemas that adequately model the abstract objects of the subject area and the semantic connections between these objects. The basis for analyzing the correctness of a schema is the so-called functional dependencies between database attributes. Some dependencies between relation attributes are undesirable because of the side effects and anomalies they cause when the database is modified. Here, by database modification we mean introducing new data into the database, deleting some data from it, and updating the values of some attributes.

However, the logical, or datalogical, design stage does not end with the design of a relation schema. In the general case, this stage should produce the following resulting documents:

  • Description of the conceptual schema of the database in terms of the selected DBMS.
  • Description of external models in terms of the selected DBMS.
  • Description of declarative rules for maintaining database integrity.
  • Description of procedures for maintaining the semantic integrity of the database.

However, before describing the constructed schema in terms of the selected DBMS, we need to build this schema. It is to this process that this section is dedicated. We must build a correct database schema based on the relational data model.

DEFINITION

Let us call a database schema correct if it contains no unwanted dependencies between relation attributes.

The process of developing the correct schema for a relational database is called logical database design.

Designing a database schema can be done in two ways:

  • by decomposition (partitioning), when the original set of relations included in the database schema is replaced by another set of relations (their number increases) that are projections of the original relations;
  • by synthesis, that is, by assembling a database schema from given initial elementary dependencies between objects of the subject area.

The classic technology for designing relational databases is associated with the theory of normalization, which is based on the analysis of functional dependencies between relation attributes. The concept of functional dependence is fundamental in the theory of normalization of relational databases. We will define it later; for now, let us touch on the meaning of this concept. Functional dependencies define stable relationships between objects and their properties in the subject area under consideration. This is why the process of supporting domain-specific functional dependencies is fundamental to the design process.

The decomposition design process is a process of sequential normalization of relation schemas, with each subsequent iteration corresponding to a normal form of a higher level that has better properties than the previous one.

Each normal form corresponds to a specific set of constraints, and a relation is in a given normal form if it satisfies the set of constraints inherent to that form.

In the theory of relational databases, the following sequence of normal forms is usually distinguished:

  • first normal form (1NF);
  • second normal form (2NF);
  • third normal form (3NF);
  • Boyce-Codd normal form (BCNF);
  • fourth normal form (4NF);
  • fifth normal form, or projection-join normal form (5NF).

Basic properties of normal forms:

  • each subsequent normal form in some sense improves the properties of the previous one;
  • when moving to the next normal form, the properties of the previous normal forms are preserved.

The classical design process is based on a sequence of transitions from the previous normal form to the subsequent one. However, in the process of decomposition we face the problem of reversibility, that is, the possibility of restoring the original schema. Thus, the decomposition must preserve the equivalence of database schemas when one schema is replaced by another.

Database schemas are called equivalent if the contents of the source database can be obtained by a natural join of the relations included in the resulting schema, and no tuples appear that were absent from the source database.

When equivalent transformations are performed, the set of original functional dependencies between relation attributes is preserved.

Functional dependencies characterize not the current state of the database but all of its possible states; that is, they reflect those connections between attributes that are inherent in the real-world object modeled by the database.

Therefore, functional dependencies can be determined from the current state of the database only if the database instance contains absolutely complete information (that is, no additions or modifications to the database are expected). In real life this requirement cannot be met, so the set of functional dependencies is specified by the developer or systems analyst on the basis of a deep system analysis of the subject area.

Let us present a number of basic definitions.

Functional dependence (FD) is a many-to-one relationship between sets of attributes within a given relationship.

Let R be a relation, and let A and B be arbitrary subsets of the set of attributes of the relation R. Then B functionally depends on A (A → B) if each value of the set A of the relation R determines exactly one value of the set B of the relation R. In other words, if two tuples of the relation R coincide in the value of A, they also coincide in the value of B.
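To make this definition concrete, here is a minimal Python sketch (the relation, attribute names and data are invented for the illustration, not taken from the lecture) that checks whether a dependency A → B holds in a given set of tuples by comparing every pair of tuples:

    # A minimal sketch: check whether the FD A -> B holds in a relation,
    # where the relation is a list of dicts and A, B are lists of attribute names.
    from itertools import combinations

    def fd_holds(relation, A, B):
        """Return True if every pair of tuples that agrees on A also agrees on B."""
        proj = lambda t, attrs: tuple(t[a] for a in attrs)
        for t1, t2 in combinations(relation, 2):
            if proj(t1, A) == proj(t2, A) and proj(t1, B) != proj(t2, B):
                return False
        return True

    # Hypothetical supplier data, used only to exercise the check.
    r = [
        {"PNUM": 1, "PNAME": "Ivanov", "CITY": "Moscow"},
        {"PNUM": 2, "PNAME": "Petrov", "CITY": "Moscow"},
        {"PNUM": 1, "PNAME": "Ivanov", "CITY": "Moscow"},
    ]
    print(fd_holds(r, ["PNUM"], ["PNAME"]))  # True: PNUM -> PNAME holds in this instance
    print(fd_holds(r, ["CITY"], ["PNAME"]))  # False: same CITY, different PNAME

Such a check only confirms that the dependency holds in one specific instance; as noted above, the actual set of functional dependencies is specified by the designer on the basis of an analysis of the subject area.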

The left-hand and right-hand sides of an FD are called the determinant and the dependent part, respectively.

If an FD is satisfied for all possible values of the relation, then it is an integrity constraint for the relation, because it imposes certain restrictions on all valid values.

If A is a candidate key of a relation R, for example, A is a primary key, then all attributes of the relation R must be functionally dependent on A (this follows from the definition of a candidate key).

The set of FDs can be large, and since FDs are integrity constraints, they must be checked every time the database is updated. Therefore, the task of reducing the set of FDs to a compact size is a pressing one.

An obvious way to reduce the set of FDs is to exclude trivial FDs.

A functional dependency is trivial if its right-hand side is a subset of its left-hand side. For example, for a database of suppliers and parts, a trivial FD is:



(PNUM, DNUM) → PNUM

Trivial dependencies cannot fail to be satisfied and are therefore of no practical interest, unlike non-trivial ones, which are genuine integrity constraints. Trivial dependencies can be excluded from the set of FDs.

A non-key attribute is any relation attribute that is not part of any relation key.

Mutually independent attributes are those attributes that do not functionally depend on one another.

What is a function? A functional dependence, or function, is a dependence between two variables in which each value of the independent variable corresponds to a single value of the dependent variable. The independent variable is otherwise called an argument, and the dependent variable is said to be a function of this argument. All the values ​​that the independent variable takes form the domain of the function.


There are several ways to specify a function: 1. Using a table. 2. Graphically. 3. Using a formula. The graph of a function is the set of all points of the coordinate plane whose abscissas are equal to the values of the argument and whose ordinates are equal to the corresponding values of the function.



A linear function is a function that can be specified by a formula of the form y = kx + b, where x is the independent variable and k and b are given numbers. To plot the graph of a linear function, it is enough to find the coordinates of two points of the graph, mark these points in the coordinate plane and draw a straight line through them. Direct proportionality is a function of the form y = kx, where x is the independent variable and k is a non-zero number. The graph of direct proportionality is a straight line passing through the origin.


Plotting the graph of a linear function. To plot the graph of a linear function, you must: select any two values of the variable x (the argument), for example 0 and 1; calculate the corresponding values of the variable y (the function); it is convenient to write the results in a table; plot the obtained points A and B in the coordinate system; connect points A and B with a ruler. Example. Let us plot the linear function y = -3x + 6:

x | 0 | 1
y | 6 | 3
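The same two-point construction can be reproduced as a small Python sketch (illustrative only; the function is the y = -3x + 6 from the example above):

    # Compute two points of the linear function y = -3x + 6;
    # two points are enough to draw its straight-line graph.
    def f(x):
        return -3 * x + 6

    for x in (0, 1):
        print(f"x = {x}, y = {f(x)}")   # prints x = 0, y = 6 and x = 1, y = 3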


Inverse proportionality is a function that can be specified by a formula of the form y = k/x, where x is the independent variable and k is a non-zero number. The domain of definition of such a function is the set of all numbers other than zero. If the quantities x and y are inversely proportional, then the functional relationship between them is expressed by the equation y = k/x, where k is some constant value. The graph of inverse proportionality is a curved line consisting of two branches, called a hyperbola. Depending on the sign of k, the branches of the hyperbola lie either in the 1st and 3rd coordinate quadrants (k positive) or in the 2nd and 4th coordinate quadrants (k negative). The figure shows a graph of the function y = k/x, where k is a negative number.
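A minimal sketch of the same idea (the value of k is invented for the illustration): tabulating y = k/x for a negative k shows the two branches lying in the 2nd and 4th quadrants:

    # Tabulate the inverse proportionality y = k/x for an illustrative k = -2.
    k = -2
    for x in (-4, -2, -1, 1, 2, 4):        # x = 0 is excluded from the domain
        y = k / x
        print(f"x = {x:>2}, y = {y:>5}")   # x < 0 gives y > 0 (2nd quadrant),
                                           # x > 0 gives y < 0 (4th quadrant)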



Special cases of the linear function: y = kx (k ≠ 0, b = 0) is direct proportionality, and its graph is a straight line passing through the origin; y = b (k = 0) is a constant function, whose graph is a horizontal line lying above the OX axis for b > 0 and below it for b < 0.

Lecture 3. General concepts and definitions. Classification of functions. Limit of a function. Infinitesimal and infinitely large functions. Basic theorems about infinitesimal functions.

Function

When solving various problems, we usually have to deal with constant and variable quantities.

Definition

A constant quantity is a quantity that retains the same value either in general or in this process: in the latter case it is called a parameter.

A variable quantity is a quantity that can take on different numerical values.

Concept of function

When studying various phenomena, we usually deal with a set of variable quantities that are interconnected in such a way that the values of some quantities (independent variables) completely determine the values of others (dependent variables, or functions).

Definition

A variable quantity y is called a (single-valued) function of a variable quantity x if they are related to each other in such a way that each value of x under consideration corresponds to a unique, completely specific value of y (as formulated by N.I. Lobachevsky).

Notation: y = f(x) (1)

x is the independent variable, or argument;

y is the dependent variable (function);

f is the characteristic of the function.

The set of all values ​​of the independent variable for which the function is defined is called the domain of definition or the domain of existence of this function. The domain of definition of a function can be: a segment, a half-interval, an interval, or the entire numerical axis.

1. Each value of the radius corresponds to a value of the area of a circle: S = πr². The area is a function of the radius, defined on the infinite interval 0 < r < ∞.

2. The function y = √(4 - x²) (2). The function is defined for -2 ≤ x ≤ 2.

For a visual representation of the behavior of a function, its graph is constructed.

Definition

The graph of a function y = f(x) is the set of points M(x, y) of the OXY plane whose coordinates are related by the given functional dependence. In other words, the graph of a function is the line whose equation is the equality defining the function.

For example, the graph of function (2) is a semicircle of radius 2 with its center at the origin.

The simplest functional dependencies

Let us look at a few simple functional dependencies.

  1. Direct proportional dependence

Definition

Two variables are called directly proportional if when one of them changes in a certain ratio, the other changes in the same ratio.

y = kx, where k is the proportionality coefficient.

The graph of the function is a straight line passing through the origin.

  2. Linear dependence

Definition

Two variables are related by a linear dependence if y = kx + b, where k and b are some constant quantities.

The graph of the function is a straight line.

  3. Inverse proportional dependence

Definition

Two variables are called inversely proportional if when one of them changes in some ratio, the other changes in the opposite ratio.

  4. Quadratic dependence

In the simplest case the quadratic dependence has the form y = kx², where k is some constant value. The graph of the function is a parabola.

  5. Sinusoidal dependence

When studying periodic phenomena, the sinusoidal dependence plays an important role:

y = A sin(ωx + φ0) — a function of this form is called a harmonic.

A is the amplitude;

ω is the frequency;

φ0 is the initial phase.

The function is periodic with period T = 2π/ω. The values of the function at the points x and x + T, which differ by a period, are the same.

The function can be reduced to the form y = A sin ω(x - x0), where x0 = -φ0/ω. From this we see that the graph of a harmonic is a deformed sinusoid with amplitude A and period T, shifted along the OX axis by the amount x0.
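A minimal numerical check of the periodicity property just described, with the amplitude, frequency and initial phase chosen arbitrarily for the illustration:

    # Check numerically that the harmonic y = A*sin(w*x + phi0)
    # repeats its values with the period T = 2*pi/w.
    import math

    A, w, phi0 = 2.0, 3.0, 0.5          # illustrative parameter values
    T = 2 * math.pi / w                 # period of the harmonic

    def harmonic(x):
        return A * math.sin(w * x + phi0)

    for x in (0.0, 0.7, 1.9):
        assert abs(harmonic(x) - harmonic(x + T)) < 1e-9
        print(f"y({x}) = {harmonic(x):.6f} = y({x} + T)")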

Methods for specifying a function

Typically, three ways of specifying a function are considered: analytical, tabular, and graphical.

  1. Analytical method of specifying a function

If a function is expressed using a formula, then it is specified analytically.

For example

If the function y = f(x) is given by a formula, then its characteristic f denotes the set of actions that must be performed, in a certain order, on the value of the argument x in order to obtain the corresponding value of the function.

Example. Three actions are performed on the value of the argument.

  2. Tabular method of specifying a function

This method establishes correspondence between variables using a table. Knowing the analytical expression of a function, we can represent this function for the argument values ​​that interest us using a table.

Is it possible to move from a tabular function assignment to an analytical expression?

Note that the table does not give all the values of the function, and intermediate values of the function can only be found approximately. This is the so-called interpolation of a function. Therefore, in the general case it is impossible to find an exact analytical expression for a function from tabular data. However, it is always possible to construct a formula, and more than one, which for the argument values present in the table gives the corresponding tabulated values of the function. Such a formula is called an interpolation formula.
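A minimal sketch of this idea using piecewise-linear interpolation (the table values are invented; numpy.interp simply interpolates linearly between neighbouring table entries):

    # Estimate an intermediate value of a tabulated function by linear interpolation.
    import numpy as np

    x_table = np.array([0.0, 1.0, 2.0, 3.0])     # tabulated argument values
    y_table = np.array([1.0, 2.7, 7.4, 20.1])    # corresponding function values

    # The table has no entry at x = 1.5; interpolation gives an approximate value.
    y_approx = np.interp(1.5, x_table, y_table)
    print(y_approx)   # 5.05, halfway between y(1.0) = 2.7 and y(2.0) = 7.4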

  3. Graphical method of specifying a function

Analytical and tabular methods do not provide a clear idea of ​​the function.

The graphical method of specifying a function y = f(x), in which the correspondence between the argument x and the function y is established by means of a graph, is free of this drawback.

The concept of an implicit function

A function is called explicit if it is given by a formula whose right-hand side does not contain the dependent variable.

A function y of an argument x is called implicit if it is given by an equation

F(x, y) = 0 (1), not resolved with respect to the dependent variable.

The concept of an inverse function

Let a function y = f(x) (1) be given. By specifying values of the argument x, we obtain values of the function y.

It is possible, regarding y as the argument and x as the function, to specify values of y and obtain values of x. In this case, equation (1) determines x as an implicit function of y. This last function is called the inverse of the given function y = f(x).

Assuming that equation (1) can be resolved with respect to x, we obtain an explicit expression for the inverse function

x = g(y) (2), where the function g for all admissible values of y satisfies the condition f(g(y)) = y. For example, for y = 2x + 1 the inverse function is x = (y - 1)/2.

The uniqueness constraints imposed by the declaration of the primary and candidate keys of a relation are a special case of the constraints associated with the concept of functional dependency.

To explain the concept of functional dependence, consider the following example.

Let us be given a relation containing data about the results of one specific examination session. The schema of this relation looks like this:

Session (Grade book no., Full name, Subject, Grade);

The attributes “Grade book no.” and “Subject” form a composite primary key of this relation (composite because two attributes are declared as the key). Indeed, from these two attributes one can unambiguously determine the values of all the other attributes.

However, in addition to the uniqueness constraint associated with this key, the relation must also satisfy the condition that one grade book is issued to one specific person; therefore, tuples with the same grade book number must contain the same values of the attributes “Last name”, “First name” and “Patronymic”.


If we consider a fragment of the database of students of some educational institution after a session, then in the tuples with grade book number 100 the attributes “Last name”, “First name” and “Patronymic” coincide, while the attributes “Subject” and “Grade” do not (which is understandable, because they refer to different subjects and to the performance in them). This means that the attributes “Last name”, “First name” and “Patronymic” are functionally dependent on the attribute “Grade book no.”, while the attributes “Subject” and “Grade” are functionally independent of it.
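Since the table fragment itself is not reproduced here, the following Python sketch builds a similar invented fragment (names, subjects and grades are made up) and checks the dependency just described: tuples with the same grade book number agree on the student's name but not necessarily on the subject or grade:

    # An invented fragment of the Session relation and a check that
    # "Grade book no." functionally determines the student's name.
    session = [
        {"gradebook_no": 100, "full_name": "Ivanov I. I.",  "subject": "Databases", "grade": 5},
        {"gradebook_no": 100, "full_name": "Ivanov I. I.",  "subject": "Calculus",  "grade": 4},
        {"gradebook_no": 101, "full_name": "Petrova A. S.", "subject": "Databases", "grade": 3},
    ]

    def determines(relation, lhs, rhs):
        """True if tuples that agree on the lhs attributes also agree on the rhs ones."""
        seen = {}
        for t in relation:
            key = tuple(t[a] for a in lhs)
            val = tuple(t[a] for a in rhs)
            if seen.setdefault(key, val) != val:
                return False
        return True

    print(determines(session, ["gradebook_no"], ["full_name"]))  # True
    print(determines(session, ["gradebook_no"], ["grade"]))      # False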

Thus, a functional dependency is a single-valued dependency between attributes that is represented in tabular form in database management systems.

Now let's give a strict definition of functional dependence.

Definition: let X and Y be subschemas of the relation schema S; they define over the schema S a functional dependency schema X → Y (read “X arrow Y”). Let us define the functional dependency constraint inv<X → Y> as the statement that, in a relation over the schema S, any two tuples that coincide in the projection onto the subschema X must also coincide in the projection onto the subschema Y.

Let's write the same definition in formal form:

inv<X → Y> r(S) = ∀ t1, t2 ∈ r (t1[X] = t2[X] ⇒ t1[Y] = t2[Y]), X, Y ⊆ S;

It is interesting that this definition uses the concept of the unary projection operation that we encountered earlier. Indeed, how else, if not with this operation, could one show that two columns of a relation table, rather than two rows, are equal to each other? So we have expressed in terms of this operation that the coincidence of tuples in the projection onto some attribute or several attributes (the subschema X) necessarily entails the coincidence of the same tuples in the projection onto the subschema Y, provided Y functionally depends on X.

It is worth noting that when Y functionally depends on X, one also says that X functionally determines Y, or that Y is functionally dependent on X. In a functional dependency schema X → Y, the subschema X is called the left-hand side and the subschema Y the right-hand side.

In database design practice, a functional dependency schema is usually referred to simply as a functional dependency for brevity.

End of definition.


In the special case when the right-hand side of the functional dependency, i.e. the subschema Y, coincides with the entire relation schema, the functional dependency constraint becomes a uniqueness constraint for the primary or candidate key. Indeed:

inv<K → S> r(S) = ∀ t1, t2 ∈ r (t1[K] = t2[K] ⇒ t1[S] = t2[S]), K ⊆ S;

One simply has to take the key designation K instead of the subschema X in the definition of functional dependency, and the entire relation schema S instead of the right-hand side, the subschema Y. So indeed, the uniqueness constraint for relation keys is a special case of the functional dependency constraint in which the right-hand side of the functional dependency schema equals the entire relation schema.

Here are examples of functional dependency schemas:

(Grade book no.) → (Last name, First name, Patronymic);

(Grade book no., Subject) → (Grade);

2. Armstrong's inference rules

If a base relation satisfies certain given functional dependencies, then, using various special inference rules, one can obtain other functional dependencies that this base relation will certainly satisfy.

A good example of such special rules are Armstrong's inference rules.

But before we analyze Armstrong's inference rules themselves, let us introduce a new metalinguistic symbol “⊢”, called the symbol of a meta-statement about derivability. When rules are formulated, this symbol is written between two syntactic expressions and indicates that the formula to the right of it is derivable from the formula to the left of it.

Let us now formulate Armstrong's inference rules themselves in the form of the following theorem.

Theorem. The following rules, called Armstrong's rules of inference, are valid.

Inference rule 1. ⊢ X → X;

Inference rule 2. X → Y ⊢ X ∪ Z → Y;

Inference rule 3. X → Y, Y ∪ W → Z ⊢ X ∪ W → Z;

Here X, Y, Z and W are arbitrary subschemas of the relation schema S. The symbol of the meta-statement about derivability separates the list of premises from the list of conclusions.

1. The first inference rule is called “reflexivity” and reads as follows: “the rule ‘X functionally entails X’ is derivable.” This is the simplest of Armstrong's inference rules: it is derived from an empty set of premises.

It is worth noting that a functional dependency whose left-hand and right-hand sides coincide is called reflexive. By the reflexivity rule, the constraint of a reflexive dependency is satisfied automatically.

2. The second inference rule is called “replenishment” (augmentation) and reads as follows: “if X functionally determines Y, then the rule ‘the union of subschemas X and Z functionally entails Y’ is derivable.” The replenishment rule allows one to extend the left-hand side of a functional dependency constraint.

3. The third inference rule is called “pseudotransitivity” and reads as follows: “if the subschema X functionally entails the subschema Y, and the union of the subschemas Y and W functionally entails Z, then the rule ‘the union of subschemas X and W functionally determines the subschema Z’ is derivable.”

The pseudotransitivity rule generalizes the transitivity rule, which corresponds to the special case W := ∅. The formal representation of the transitivity rule is: X → Y, Y → Z ⊢ X → Z.

It should be noted that the premises and conclusions above were given in abbreviated form using the designations of functional dependency schemas. In extended form they correspond to the following functional dependency constraints.

Inference rule 1. inv<X → X> r(S);

Inference rule 2. inv<X → Y> r(S) ⇒ inv<X ∪ Z → Y> r(S);

Inference rule 3. inv<X → Y> r(S) & inv<Y ∪ W → Z> r(S) ⇒ inv<X ∪ W → Z> r(S);
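As a hedged illustration (the attribute names come from the Session example above, while the second premise and the helper function names are invented for this sketch), the rules can be applied mechanically to dependencies represented as pairs of attribute sets:

    # A sketch of Armstrong's rules over FDs represented as (lhs, rhs) pairs of frozensets.
    def reflexivity(X):                            # |- X -> X
        return (frozenset(X), frozenset(X))

    def augmentation(fd, Z):                       # X -> Y |- X u Z -> Y
        X, Y = fd
        return (X | frozenset(Z), Y)

    def pseudotransitivity(fd1, fd2):              # X -> Y, Y u W -> Z |- X u W -> Z
        (X, Y), (YW, Z) = fd1, fd2
        assert Y <= YW                             # the left side of the second premise must contain Y
        return (X | (YW - Y), Z)

    fd1 = (frozenset({"Grade book no."}), frozenset({"Last name"}))
    fd2 = (frozenset({"Last name", "Subject"}), frozenset({"Grade"}))    # invented premise
    print(reflexivity({"Subject"}))
    print(augmentation(fd1, {"Subject"}))          # (Grade book no., Subject) -> (Last name)
    print(pseudotransitivity(fd1, fd2))            # (Grade book no., Subject) -> (Grade)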

Let us prove these inference rules.

1. The proof of the reflexivity rule follows directly from the definition of the functional dependency constraint when the subschema X is substituted for the subschema Y.

Indeed, let's take the functional dependence constraint:

inv<X → Y> r(S), and substituting X for Y in it, we get:

inv<X → X> r(S), which is exactly the reflexivity rule.

The rule of reflexivity has been proven.

2. Let us illustrate the proof of the replenishment rule with functional dependency diagrams.

The first diagram is the premise diagram:

premise: X → Y


Second diagram:

conclusion: X ∪ Z → Y


Let two tuples be equal on X ∪ Z. Then they are equal on X. By the premise, they will then be equal on Y.

The replenishment rule has been proven.

3. We will also illustrate the proof of the pseudotransitivity rule with diagrams, of which there are three in this case.

The first diagram is the first premise:

premise 1: X → Y


premise 2: Y ∪ W → Z


And finally, the third diagram is the conclusion diagram:

conclusion: X ∪ W → Z


Let two tuples be equal on X ∪ W. Then they are equal both on X and on W. By premise 1, they will be equal on Y. Hence, by premise 2, they will be equal on Z.

The pseudotransitivity rule has been proven.

All rules have been proven.

3. Derived inference rules

Another example of rules with whose help new functional dependencies can be derived when necessary are the so-called derived inference rules.

What are these rules, how are they obtained?

It is known that if other rules are derived from already existing rules by legitimate logical methods, then these new rules, called derived rules, can be used alongside the original ones.

It should be specially noted that these derived rules are obtained precisely from the Armstrong inference rules covered earlier.

Let us formulate the derived rules for inferring functional dependencies in the form of the following theorem.

Theorem.

The following rules are derived from Armstrong's inference rules.

Inference rule 1. ⊢ X ∪ Z → X;

Inference rule 2. X → Y, X → Z ⊢ X → Y ∪ Z;

Inference rule 3. X → Y ∪ Z ⊢ X → Y, X → Z;

Here X, Y, Z and W, as in the previous case, are arbitrary subschemas of the relation schema S.

1. The first derived rule is called the rule of triviality and reads as follows:

“The rule ‘the union of subschemas X and Z functionally entails X’ is derivable.”

A functional dependency whose right-hand side is a subset of its left-hand side is called trivial. By the triviality rule, the constraints of trivial dependencies are satisfied automatically.

Interestingly, the triviality rule is a generalization of the reflexivity rule and, like the latter, could be derived directly from the definition of the functional dependency constraint. The fact that this rule is derivable is not accidental and is connected with the completeness of Armstrong's system of rules, which we will discuss a little later.

2. The second derived rule is called the additivity rule and reads as follows: “if the subschema X functionally determines the subschema Y, and X simultaneously functionally determines Z, then the rule ‘X functionally determines the union of subschemas Y and Z’ is derivable.”

3. The third derived rule is called the projectivity rule, or the rule of “reversed additivity”. It reads as follows: “if the subschema X functionally determines the union of the subschemas Y and Z, then the rules ‘X functionally determines the subschema Y’ and ‘X functionally determines the subschema Z’ are derivable”; that is, this derived rule is indeed the inverse of the additivity rule.

It is curious that the additivity and projectivity rules, applied to functional dependencies with identical left-hand sides, allow one to combine or, conversely, split the right-hand sides of the dependencies.

When constructing inference chains, after all the premises have been stated, the transitivity rule is applied in order to obtain a functional dependency whose right-hand side appears in the conclusion.

Let us prove the listed derived inference rules.

1. Proof of the triviality rule.

Let us carry it out, like all subsequent proofs, step by step:

1) we have: X → X (from Armstrong's reflexivity rule);

2) we have: X ∪ Z → X (obtained by applying Armstrong's replenishment rule to the first step).

The triviality rule has been proven.

2. Let us carry out a step-by-step proof of the additivity rule:

1) we have: X → Y (this is premise 1);

2) we have: X → Z (this is premise 2);

3) we have: Y ∪ Z → Y ∪ Z (from Armstrong's reflexivity rule);

4) we have: X ∪ Z → Y ∪ Z (obtained by applying Armstrong's pseudotransitivity rule to the first and third steps of the proof);

5) we have: X ∪ X → Y ∪ Z (obtained by applying Armstrong's pseudotransitivity rule to the second and fourth steps);

6) we have: X → Y ∪ Z (follows from step five, since X ∪ X = X).

The additivity rule has been proven.

3. Finally, let us construct a proof of the projectivity rule:

1) we have: X → Y ∪ Z (this is the premise);

2) we have: Y → Y, Z → Z (derived using Armstrong's reflexivity rule);

3) we have: Y ∪ Z → Y, Y ∪ Z → Z (obtained by applying Armstrong's replenishment rule to the second step of the proof);

4) we have: X → Y, X → Z (obtained by applying Armstrong's pseudotransitivity rule to the first and third steps of the proof).

The rule of projectivity has been proven.

All derived inference rules have been proven.

4. Completeness of Armstrong's system of rules

Let F(S) be a given set of functional dependencies defined over the relation schema S.

Let us denote by inv<F(S)> the constraint imposed by this set of functional dependencies. Let us write it down:

inv<F(S)> r(S) = ∀ X → Y ∈ F(S) [inv<X → Y> r(S)].

This constraint imposed by a set of functional dependencies is read as follows: for every functional dependency X → Y belonging to the set F(S), the functional dependency constraint inv<X → Y> r(S), defined over the relation r(S), is in effect.

Let some relation r(S) satisfy this constraint.

By applying Armstrong's inference rules to the functional dependencies of the set F(S), one can obtain new functional dependencies, as we have already said and proved earlier. And, significantly, the relation r(S) will automatically satisfy the constraints of these new functional dependencies, as can be seen from the extended form of Armstrong's inference rules. Let us recall the general form of these extended rules:

Inference rule 1. inv<X → X> r(S);

Inference rule 2. inv<X → Y> r(S) ⇒ inv<X ∪ Z → Y> r(S);

Inference rule 3. inv<X → Y> r(S) & inv<Y ∪ W → Z> r(S) ⇒ inv<X ∪ W → Z> r(S);

Returning to our reasoning, let us extend the set F(S) with the new dependencies derivable from it by Armstrong's rules. We apply this extension procedure until no new functional dependencies can be obtained. As a result of this construction we obtain a new set of functional dependencies, called the closure of the set F(S) and denoted F+(S).

Indeed, this name is quite logical: by this lengthy construction we have “closed” the set of existing functional dependencies under derivation, adding (hence the “+”) all the new functional dependencies that follow from the existing ones.

It should be noted that this process of constructing a closure is finite, because the relational scheme itself, on which all these constructions are carried out, is finite.

It goes without saying that the closure is a superset of the set being closed (indeed, it is larger!) and does not change at all when it is closed again.

If we write down what was just said in formal form, we get:

F(S) ⊆ F+(S), [F+(S)]+ = F+(S);

Further, from the proven soundness (i.e., validity) of Armstrong's inference rules and from the definition of the closure, it follows that any relation satisfying the constraints of a given set of functional dependencies will also satisfy the constraint of any dependency belonging to the closure.

X → Y ∈ F+(S) ⇒ ∀ r(S) [inv<F(S)> r(S) ⇒ inv<X → Y> r(S)];

Armstrong's completeness theorem for this system of inference rules states that the outer implication here can quite legitimately and justifiably be replaced by an equivalence.

(We will not consider the proof of this theorem, since the proof process itself is not so important in our specific lecture course.)
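In practice, membership of a dependency X → Y in the closure F+(S) is usually tested not by enumerating F+(S) but by computing the attribute closure of X under F, an algorithm equivalent to repeatedly applying Armstrong's rules. The following Python sketch illustrates this; the sample dependency set is invented:

    # Decide whether X -> Y belongs to F+ by computing the closure of X under F.
    def attribute_closure(X, F):
        """F is a list of (lhs, rhs) pairs of attribute sets; returns X+ under F."""
        closure = set(X)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in F:
                if set(lhs) <= closure and not set(rhs) <= closure:
                    closure |= set(rhs)
                    changed = True
        return closure

    def derivable(X, Y, F):
        return set(Y) <= attribute_closure(X, F)

    F = [({"A"}, {"B"}), ({"B"}, {"C"})]   # illustrative dependency set
    print(derivable({"A"}, {"C"}, F))      # True: A -> C is in F+ (transitivity)
    print(derivable({"B"}, {"A"}, F))      # False: B -> A is not derivable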

A relational database contains both structural and semantic information. The structure of a database is determined by the number and kind of relations it contains and by the one-to-many relationships that exist between the tuples of these relations. The semantic part describes the set of functional dependencies that exist between the attributes of these relations. Let us define functional dependency once more.

Definition: if two attributes X and Y of some relation are given, then Y is said to functionally depend on X if at any moment of time each value of X corresponds to exactly one value of Y. The functional dependency is denoted X -> Y. Note that X and Y may be not only single attributes but also groups composed of several attributes of the same relation. We can say that functional dependencies are one-to-many relationships that exist within a relation.

    2nd normal form (2NF) of a relation. Definition of full functional dependence and 2NF. Characteristics of relations in 2NF. Algorithm for reduction to 2NF. Heath's theorem. Examples.

The concept of full functional dependence.

Definition: a non-key attribute is fully functionally dependent on a composite key if it is functionally dependent on the key as a whole but is not functionally dependent on any proper part of it.

Definition: a redundant functional dependency is a dependency that conveys information which can be obtained from other dependencies present in the database.

2NF - second normal form.

Definition of second normal form: a relation is in 2NF if it is in 1NF and every non-key attribute is fully functionally dependent on the primary key.
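A hedged sketch of how this definition can be checked mechanically: for the Session example above, we look for a proper subset of the composite key that determines a non-key attribute (a partial dependency, which violates 2NF). The dependency set and attribute names follow the example; the helper functions are ours:

    # Detect a partial dependency: a proper subset of the composite key
    # that functionally determines a non-key attribute (a 2NF violation).
    from itertools import combinations

    def closure(X, F):
        result = set(X)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in F:
                if set(lhs) <= result and not set(rhs) <= result:
                    result |= set(rhs)
                    changed = True
        return result

    key = {"Grade book no.", "Subject"}
    F = [({"Grade book no.", "Subject"}, {"Grade"}),
         ({"Grade book no."}, {"Full name"})]
    non_key = {"Full name", "Grade"}

    for attr in non_key:
        for size in range(1, len(key)):
            for part in combinations(key, size):
                if attr in closure(part, F):
                    print(f"partial dependency: {set(part)} -> {attr}; the relation is not in 2NF")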

A database schema that has no redundant functional dependencies is considered correct. Otherwise one has to resort to the procedure of decomposition (splitting) of the existing set of relations. In this case the generated set contains a larger number of relations, which are projections of the relations of the original set. (The projection operation is described in the section on relational algebra.) The reversible step-by-step process of replacing a given set of relations by another schema, in which redundant functional dependencies are eliminated, is called normalization.

The reversibility condition requires that the decomposition preserve the equivalence of schemas when one schema is replaced by another, i.e., in the resulting relations:

1) previously missing tuples should not appear;

2) the relations of the new schema must satisfy the original set of functional dependencies.

Heath's theorem

Let a relation r(A, B, C) be given, where A, B and C are subsets of its set of attributes.

If r satisfies the functional dependency A → B, then r is equal to the natural join of its projections r[A, B] and r[A, C].
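A hedged numerical check of Heath's theorem on a small invented relation: we project onto (A, B) and (A, C) and verify that the natural join of the projections restores the original set of tuples:

    # Verify Heath's theorem on a toy relation r(A, B, C) satisfying A -> B:
    # r must equal the natural join of its projections r[A, B] and r[A, C].
    r = {("a1", "b1", "c1"),
         ("a1", "b1", "c2"),
         ("a2", "b2", "c1")}

    proj_ab = {(a, b) for a, b, _ in r}
    proj_ac = {(a, c) for a, _, c in r}

    # Natural join of the two projections on the common attribute A.
    joined = {(a, b, c) for a, b in proj_ab for a2, c in proj_ac if a == a2}

    print(joined == r)   # True: the decomposition is lossless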

    3rd normal form (3NF) of a relation. Definition of transitive dependence and 3NF. Algorithm for reduction to 3NF. Boyce-Codd normal form (BCNF). Definition and algorithm for reduction to BCNF. Characteristics of relations in 3NF and in BCNF. Examples.