Space and time complexity

For most problems there are many different algorithms that solve them. Which one should be chosen for a specific task? Efficiency is a very important characteristic of a program (code). In programming one always tries to choose the most efficient solution and to evaluate the limits of applicability of the selected algorithm [1].

Consider an example of such an estimate. A real estate agency has a database of n entries, each entry containing one offer (what is available) and one request (what is wanted) for properties. It is required to find exchange options, that is, all pairs of records in which the offer of the first record matches the request of the second and vice versa. Suppose that comparing one pair of records takes one millisecond. Then, when looking for a solution in the straightforward way (the "head-on" algorithm), in which each record is compared with all the others, n(n-1)/2 comparisons are needed.

If the database has n = 100 entries, the solution is obtained in 4.95 seconds. But if n = 100,000 (a more realistic figure), then the time to obtain a solution is 4,999,950 seconds, that is, about 1389 hours, or 58 days, or almost two months. Moreover, this is only an estimate of the time for selecting direct exchanges; in real life the number of exchange participants is often more than two.
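A minimal Pascal sketch of this "head-on" algorithm might look as follows (the record layout with integer offer/request codes and the array size are assumptions made purely for illustration):

const
  MaxN = 100000;
type
  TEntry = record
    offer, request: integer;      { coded property descriptions (assumed layout) }
  end;
var
  db: array[1..MaxN] of TEntry;
  n, i, j: integer;

{ after db[1..n] has been filled in: }
for i := 1 to n - 1 do
  for j := i + 1 to n do
    { one comparison of a pair of records }
    if (db[i].offer = db[j].request) and (db[j].offer = db[i].request) then
      writeln('entries ', i, ' and ', j, ' can exchange');

The body of the inner loop is executed exactly (n-1) + (n-2) + … + 1 = n(n-1)/2 times, which is where the estimate above comes from.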

This example shows that when choosing an algorithm for solving a problem, it is necessary to evaluate both the size of the problem (the amount of input data) and the efficiency of the algorithm that solves it.

The efficiency of an algorithm is evaluated by two parameters: its running time and the amount of memory it requires.

The memory complexity (also called spatial or capacitive complexity) of an algorithm is the amount of memory required to execute the algorithm, as a function of the size of the input data.

Computers have a limited amount of memory. If two programs implement identical functions, then the one that uses less memory will be more efficient.

In the early days (the 1970s and 1980s) the amount of memory was the dominant factor in choosing an algorithm. Later, due to the rapid fall in the cost of memory, this measure of algorithm efficiency gradually lost its significance. At present, however, due to the widespread use of portable devices that perform, among other things, the functions of a computer, memory requirements have again become relevant.

The time complexity of an algorithm is the dependence of the number of operations performed by the algorithm on the size of the input data.

Note: the running time of the same algorithm in seconds (minutes, etc.) may differ on computers with different speeds, so for a universal assessment the number of operations performed by the algorithm is used instead.

When counting, only essential operations are taken into account: comparison of two values, addition, subtraction, multiplication, division, MOD, DIV, and evaluation of the Boolean operations OR, NOT and AND.

Operations such as SIN, COS, EXP and LOG are counted in terms of the number of additions and multiplications, since for specific values they are computed by series expansion.

Assignments and the operations on a loop counter are not taken into account, because their execution takes much less time and their share of the total number of operations falls as the size of the problem grows significantly.
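For example (a hypothetical fragment), when summing the elements of an array only the addition inside the loop is counted:

s := 0;                 { assignment: not counted }
for i := 1 to N do      { loop-counter operations: not counted }
  s := s + a[i];        { one addition per iteration }

so the time complexity function of this fragment is simply T(N) = N essential operations.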

For a specific algorithm, the calculation of the exact form of the time complexity function is quite laborious, therefore, in practice, instead of exact formulas, comparative estimates of the behavior of the complexity function with increasing input data size are used.

Let N be the size of the input data of the algorithm.

Then the asymptotic complexity of the algorithm is the behavior of the complexity function with increasing N.

Mathematically, the asymptotic complexity is calculated using O-functions (we will use “Big O”). O-functions express the relative speed of an algorithm depending on some variable (or variables).

Definition. The function f(n) is of order O(g(n)) if there exist a constant K and a number n0 such that f(n) ≤ K*g(n) for all n > n0.
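For example, the function f(n) = 3n + 5 is of order O(n): taking K = 4 and n0 = 5, we have 3n + 5 ≤ 4n = K*g(n) for all n > 5.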

Basic relations for O-functions:

1) O(k*f) = O(f), where k is some constant

2) O(f*g) = O(f)*O(g) or O(f/g)=O(f)/O(g)

3) O(f+g) is equal to the dominant (the faster-growing) of O(f) and O(g)

For example:

1.5*N = O(N),

O((17*N)*N) = O(17*N)*O(N) = O(N)*O(N) = O(N*N) = O(N²),

O(N⁵ + N²) = O(N⁵).

Complexity classes

In the theory of algorithms, complexity classes are sets of computational problems that are roughly equal in computational complexity. More narrowly, complexity classes are sets of predicates (functions that take a word as input and return a response of 0 or 1) that require approximately the same amount of resources to compute [1].

Within each class there is a category of problems that are the "most difficult": any problem in the class reduces to such a problem, and the problem itself lies in the class. Such problems are called complete problems for the given class. The best-known examples are the NP-complete problems.

Each complexity class (in the narrow sense) is defined as a set of predicates with certain properties. A typical definition looks like this: a complexity class X is the set of predicates P(x) computable on Turing machines that use O(f(n)) of some resource for the computation, where n is the length of the word x.

The resource is usually computation time (the number of steps of the Turing machine) or working space (the number of tape cells used during the computation). The languages recognized by the predicates of a class (that is, the sets of words for which a predicate returns 1) are also said to belong to that class. In addition, many classes can be described in terms of mathematical logic.

All complexity classes form a hierarchy: some include others. However, for most inclusions it is not known whether they are strict. One of the most famous open problems in this area is whether the classes P and NP are equal. If this were true (which most scientists doubt), the class hierarchy would collapse considerably. At the moment the prevailing hypothesis is that the hierarchy is non-degenerate, that is, all the classes are distinct. It is also known that EXPSPACE is not equal to PSPACE.

Consider a function f and an input string of length n. The class DTIME(f(n)) is defined as the class of languages accepted by deterministic Turing machines that finish their work in time not exceeding f(n). The class NTIME(f(n)), in turn, is the class of languages accepted by non-deterministic Turing machines that finish in time not exceeding f(n). Note that these definitions place no restrictions on memory.

Similarly to the time hierarchy, a memory hierarchy is introduced. The class DSPACE(f(n)) denotes a class of languages accepted by deterministic Turing machines using at most f(n) memory locations on working tapes. The NSPACE(f(n)) class is defined as a class of languages accepted by non-deterministic Turing machines using at most f(n) memory locations on work tapes. There are no time limits when defining these classes.
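In this notation the most familiar classes can be written explicitly: P is the union of the classes DTIME(n^k) over all constants k, NP is the union of NTIME(n^k), PSPACE is the union of DSPACE(n^k), and EXPSPACE is the union of DSPACE(2^(n^k)).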

The other classes mentioned above are defined in a similar way. The abbreviations used are:

D – deterministic

N – non-deterministic

R – probabilistic with limited one-sided error

B – probabilistic with limited two-sided error

BQ – quantum with bounded two-sided error

O-complexity of algorithms

O(1) Constant complexity. Most operations in a program are executed only once or only a few times. Any algorithm that always requires the same amount of time regardless of the size of the data has constant complexity [1].

O(N) Linear complexity. The running time of a program is usually linear when each element of the input needs to be processed only a small, constant number of times.

O(N²), O(N³), O(Nᵃ) Polynomial complexity: O(N²) is quadratic complexity, O(N³) is cubic complexity.

O(log N) Logarithmic complexity. When a program's running time is logarithmic, the program becomes only slightly slower as N grows. This kind of running time usually occurs in programs that reduce a large problem to a smaller one at each step.

O(N*log N) This running time is typical of algorithms that divide a large problem into smaller subproblems, solve them, and then combine their solutions.

O(2ᴺ) Exponential complexity. Such algorithms most often result from an approach known as brute-force (exhaustive) search.
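Typical examples of these classes: accessing an array element by index is O(1); a linear scan of N elements is O(N); binary search in a sorted array is O(log N); mergesort is O(N*log N); comparing all pairs of records, as in the exchange example above, is O(N²); and exhaustive search over all subsets of N items is O(2ᴺ).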

The programmer must be able to analyze algorithms and determine their complexity. The time complexity of an algorithm can be calculated based on the analysis of its control structures.

Algorithms without loops and recursive calls have constant complexity: if there are no loops or recursion, all control structures can be reduced to structures of constant complexity, and consequently the entire algorithm has constant complexity as well. Determining the complexity of an algorithm therefore basically comes down to analyzing its loops and recursive calls.

For example, consider an algorithm for processing array elements.

for i := 1 to N do
begin
  { process element a[i]; the loop body has complexity O(1) }
end;

The complexity of this algorithm is O(N), because the body of the loop is executed N times, and the complexity of the loop body is O(1).

If one loop is nested inside another and both loops depend on the size N of the same input, the whole construction has quadratic complexity.

for i := 1 to N do
  for j := 1 to N do
  begin
    { loop body of constant complexity }
  end;

The complexity of this program is O(N²).

Let us evaluate the complexity of the Pythagorean triples program. There are two ways to analyze the complexity of an algorithm: bottom-up (from the inner control structures to the outer ones) and top-down (from the outer structures to the inner ones).

O(H) = O(1) + O(1) + O(1) = O(1);

O(I) = O(N)*(O(F) + O(J)) = O(N)*O(dominant of the condition branches) = O(N);

O(G) = O(N)*(O(C) + O(I) + O(K)) = O(N)*(O(1) + O(N) + O(1)) = O(N²);

O(E) = O(N)*(O(B) + O(G) + O(L)) = O(N)*O(N²) = O(N³);

O(D) = O(A) + O(E) = O(1) + O(N³) = O(N³)

The complexity of this algorithm is O(N³).
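The text of the Pythagorean triples program itself is not reproduced here; the letters A–L refer to its labelled fragments. A minimal sketch of such a program, assumed here to be a brute-force search with three nested loops over a, b and c up to N, shows where the O(N³) estimate comes from:

count := 0;                       { constant-time initialization }
for a := 1 to N do                { outer loop: N iterations }
  for b := a to N do              { middle loop: up to N iterations }
    for c := b to N do            { inner loop: up to N iterations }
      if a*a + b*b = c*c then     { constant-time test of one triple }
        count := count + 1;

Three nested loops, each depending on N, with a constant-time body give O(N)*O(N)*O(N)*O(1) = O(N³).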

As a rule, about 90% of a program's running time is spent on repetitions and only 10% on the computations themselves. Analysis of a program's complexity shows which fragments account for these 90%: they are the loops of the greatest nesting depth. Repetitions can be organized as nested loops or as nested recursion.

The programmer can use this information to build a more efficient program in the following way. First of all, try to reduce the nesting depth of the repetitions; then consider reducing the number of statements in the most deeply nested loops. If 90% of the execution time is spent executing the inner loops, then a 30% reduction in these relatively small sections reduces the execution time of the whole program by 90% * 30% = 27%.

A separate branch of mathematics deals with the analysis of the efficiency of algorithms, and finding the best estimate is not always easy. Let us evaluate the binary search (dichotomy) algorithm for an array.

The essence of the algorithm: go to the middle of the array and compare the key with the value of the middle element. If they do not match, look at whether the key is smaller or larger than the middle element and move to the lower or upper half of the list accordingly. In that half we again find the middle and compare it with the key; if there is still no match, we again halve the current interval.

function search(low, high, key: integer): integer;
var
  mid, data: integer;
begin
  search := -1;                    { default result: key not found }
  while low <= high do
  begin
    mid := (low + high) div 2;     { middle of the current interval }
    data := a[mid];                { a is the global sorted array being searched }
    if key = data then
    begin
      search := mid;               { found: return the index and stop }
      exit;
    end
    else if key < data then
      high := mid - 1              { continue the search in the lower half }
    else
      low := mid + 1;              { continue the search in the upper half }
  end;
end;
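The function refers to a global array a, which must be sorted in ascending order. A minimal usage sketch (the array bounds and contents are chosen purely for illustration) might look like this:

const
  MaxN = 100;
var
  a: array[1..MaxN] of integer;   { must be sorted in ascending order }
  pos, i: integer;
begin
  for i := 1 to MaxN do
    a[i] := 2 * i;                { an ordered sequence: 2, 4, 6, … }
  pos := search(1, MaxN, 50);     { look for the value 50 }
  if pos <> -1 then
    writeln('found at index ', pos)
  else
    writeln('not found')
end.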

Solution:

The first iteration of the loop deals with the entire list; each subsequent iteration halves the size of the sublist. Thus the sizes of the list handled by the algorithm are

n, n/2¹, n/2², n/2³, n/2⁴, …, n/2ᵐ.

Eventually there will be an integer m such that

n/2ᵐ < 2, that is, n < 2ᵐ⁺¹.

Since m is the first integer for which n/2ᵐ < 2, it must also be true that

n/2ᵐ⁻¹ ≥ 2, that is, 2ᵐ ≤ n.

It follows that

2ᵐ ≤ n < 2ᵐ⁺¹.

Taking the logarithm of each part of the inequality, we get

m ≤ log₂ n < m + 1.

The value m is therefore the largest integer not exceeding log₂ n, i.e. m = ⌊log₂ n⌋.

So the complexity of binary search is O(log n).
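For example, for n = 1,000,000 we get ⌊log₂ 1,000,000⌋ = 19, so binary search performs at most about 20 iterations of the loop, whereas a simple linear scan may need up to 1,000,000 comparisons.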

Usually the problem being solved has a natural "size" (typically the amount of data it processes), which we call N. Ultimately we would like to obtain an expression for the time the program needs to process data of size N, as a function of N. Usually we are interested in the average case, the expected running time of the program on "typical" inputs, and in the worst case, the expected running time of the program on the worst possible inputs.

Some algorithms are well understood in the sense that exact mathematical formulas are known for the mean and worst cases. Such formulas are developed by carefully examining programs in order to find the running time in terms of mathematical characteristics, and then performing their mathematical analysis.

A few important points should be kept in mind in this kind of analysis:

1. Programs written in a high-level language are translated into machine code, and it can be difficult to estimate how long the execution of a particular statement will take.

2. Many programs are very sensitive to their input data, and their efficiency can depend heavily on it. The average case may turn out to be a mathematical fiction unrelated to the data on which the program is actually used, and the worst case may be improbable.

Best, average and worst cases play an especially large role in sorting, where the amount of computation depends strongly on the initial ordering of the data.
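For example, straight insertion sort makes about N comparisons in the best case (the input is already sorted) and about N²/2 comparisons in the worst case (the input is sorted in reverse order).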

O-complexity analysis has become widespread in many practical applications. However, its limitations must be clearly understood.

The main disadvantages of the approach include the following:

1) for complex algorithms, obtaining O-estimates, as a rule, is either very laborious or almost impossible,

2) it is often difficult to determine the complexity “on average”,

3) O-estimates are too coarse to reflect finer differences between algorithms,

4) O-analysis provides too little information (or none at all) to analyze the behavior of algorithms when processing small amounts of data.

Determining complexity in O-notation is a far from trivial task. In particular, the efficiency of a binary search is determined not by the depth of nesting of loops, but by the way each successive attempt is chosen.

Another difficulty is the definition of “average case”. Usually it is quite difficult to do this because of the impossibility of predicting the operating conditions of the algorithm. Sometimes an algorithm is used as a fragment of a large, complex program. Sometimes the performance of the hardware and/or operating system, or some component of the compiler significantly affects the complexity of the algorithm. Often the same algorithm can be used in many different applications.

Due to the difficulties associated with analyzing the time complexity of an algorithm "on average", one often has to be content with estimates for the worst and best cases. These estimates essentially define lower and upper bounds on the "average" complexity. If it is not possible to analyze the complexity of the algorithm "on average", it remains to follow one of Murphy's laws, according to which the estimate obtained for the worst case can serve as a good approximation of the complexity "on average".

Perhaps the main disadvantage of O-functions is their excessive coarseness. If algorithm A completes a task in 0.001*N s while algorithm B takes 1000*N s, then A is a million times faster than B. Nevertheless, A and B have the same time complexity O(N).

Most of this lecture was devoted to the analysis of the time complexity of algorithms. There are other forms of complexity that should not be forgotten: spatial and intellectual complexity.

Space complexity, the amount of memory required to execute a program, has already been discussed above.

When analyzing the intellectual complexity of an algorithm, the understandability of algorithms and the complexity of their development are investigated.

All three forms of complexity are usually interrelated. As a rule, when developing an algorithm with a good time complexity estimate, one has to sacrifice its spatial and/or intellectual complexity. For example, the quicksort algorithm is significantly faster than the selection sort algorithm. The price paid for the increased sorting speed is the extra memory needed for sorting; the need for additional memory in quicksort is due to its multiple recursive calls.

Quicksort is also characterized by greater intellectual complexity than simple algorithms such as insertion or selection sort. If you ask a hundred people to sort a sequence of objects, most of them will probably use something like selection sort, and it is unlikely that any of them will use quicksort. The reasons for quicksort's greater intellectual and spatial complexity are obvious: the algorithm is recursive, it is rather difficult to describe, and its program text is longer than that of the simpler sorting algorithms.
