3. Algorithms
3.1 Introduction to Algorithms
In many domains there are key general problems that ask for output with specific properties when given valid input.
To state the problem precisely, use appropriate structures to specify the input and the desired output. Then solve the general problem by specifying the steps of a procedure that takes a valid input and produces the desired output. This procedure is called an algorithm.
3.1.1 Basic Concepts
**Algorithm**: An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.

**Pseudocode**: An intermediate step between an English-language description of the steps and a coding of these steps in a programming language.

⚠️ Algorithms can be specified in different ways: their steps can be described in English or in pseudocode. Pseudocode is clearer to read than natural language and easier to understand than a programming language. Moreover, pseudocode is independent of any specific programming language, so it is more portable.

**Input**: An algorithm has input values from a specified set.

**Output**: From the input values, the algorithm produces output values from a specified set. The output values are the solution.
The following are properties we expect algorithms to have:

**Definiteness**: The steps of an algorithm must be defined precisely.

**Correctness**: An algorithm should produce the correct output values for each set of input values.

**Finiteness**: An algorithm should produce the output after a finite number of steps for any input.

**Effectiveness**: It must be possible to perform each step of the algorithm in a finite amount of time.

**Generality**: The algorithm should work for all problems of the desired form.

3.1.2 Example Algorithm Problems
**Algorithmic Paradigm**: A general approach based on a particular concept that can be used to construct algorithms for solving a variety of problems.

- Greedy algorithms
- Brute-force algorithm
- Divide-and-conquer algorithms
- Dynamic programming
- Backtracking
- Probabilistic algorithms
📋 Searching Problems
Locate an element x in a list of finitely many distinct elements, or determine that it is not in the list.
- Linear Search Algorithm in Pseudocode

- Binary Search Algorithm in Pseudocode
![Binary Search Algorithm in Pseudocode](/uploads/projects/linguisty@zju_courses/ea9a66f421a192f84b785eda01bca4e0.png)
The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located. It is more efficient than linear search, but it applies only to sorted (increasing) sequences.
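The interval-halving idea above can be sketched in Python (a minimal illustration, not the course's official pseudocode; the function name and 0-based list interface are my own):

```python
def binary_search(a, x):
    """Return the index of x in the sorted list a, or -1 if x is absent."""
    lo, hi = 0, len(a) - 1          # current search interval [lo, hi]
    while lo <= hi:
        mid = (lo + hi) // 2        # midpoint of the interval
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1            # x can only lie in the right half
        else:
            hi = mid - 1            # x can only lie in the left half
    return -1                       # interval is empty: x is not in the list
```

Each iteration halves the interval, which is where the efficiency advantage over linear search comes from.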
📋 Sorting Problems
To sort the elements of a list is to put them in increasing order (numerical order, alphabetical order, and so on). It is an important problem with many different algorithms for various situations.
- Bubble Sort Algorithm

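Bubble sort can be sketched in Python as follows (a minimal illustration of the technique, assuming a list of comparable elements; not the course's official pseudocode):

```python
def bubble_sort(a):
    """Sort a into increasing order by repeatedly swapping adjacent out-of-order pairs."""
    a = list(a)                          # work on a copy
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):       # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

After pass i, the i largest elements have "bubbled" to the end of the list, so each pass can stop one position earlier.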
- Insertion Sort Algorithm
![Insertion Sort Algorithm in Pseudocode](/uploads/projects/linguisty@zju_courses/766b89c033d85fa26e1af39424973980.png)
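Insertion sort can likewise be sketched in Python (an illustrative implementation of the same idea, not the course's official pseudocode):

```python
def insertion_sort(a):
    """Sort a into increasing order by inserting each element into the sorted prefix."""
    a = list(a)                        # work on a copy
    for i in range(1, len(a)):
        key = a[i]                     # next element to insert
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # drop key into its correct position
    return a
```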
📋 Optimization Problems
Minimize or maximize some parameter over all possible inputs.
- Greedy Algorithms
Make the “best” choice at each step.
Making the “best” choice at each step does not necessarily produce an optimal solution to the overall problem, but in many instances it does. Before using a greedy algorithm on a specific problem, one must prove that the locally best choices lead to a globally optimal solution in that case.
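A classic illustration is change-making with U.S. coin denominations, where the greedy choice does yield an optimal solution (a minimal Python sketch; the function name and interface are my own):

```python
def greedy_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy change-making: at each step take the largest coin that still fits."""
    coins = []
    for d in denominations:        # denominations must be in decreasing order
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins
```

With denominations (25, 10, 1) and amount 30, however, greedy returns six coins (one 25 and five 1s) while three 10s are optimal, which is exactly why the proof obligation above matters.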
3.2 The Growth of Functions
In computer science, we want to understand how quickly an algorithm can solve a problem as the size of the input grows, so that we can compare the efficiency of two different algorithms for solving the same problem and determine whether it is practical to use a particular algorithm as the input grows.
3.2.1 Big-O Notations
**Big-O Notation**: Let $f$ and $g$ be functions. We say that $f(x)$ is $O(g(x))$, read as “$f(x)$ is big-oh of $g(x)$”, if there exist constants $C$ and $k$ such that $|f(x)| \le C|g(x)|$ whenever $x > k$.
The constants $C$ and $k$ are called **witnesses** to the relationship “$f(x)$ is $O(g(x))$”. Only one pair of witnesses is needed to show that $f(x)$ is $O(g(x))$, and if one pair of witnesses exists, there are infinitely many pairs.
If $f(x)$ is $O(g(x))$ and $g(x)$ is $O(f(x))$, we say that the two functions are of the **same order**.
For many applications, the goal is to select the function $g(x)$ in $O(g(x))$ as small as possible (up to multiplication by a constant, of course).
📋 If $f(x)$ is $O(g(x))$ and $|h(x)| > |g(x)|$ whenever $x$ is sufficiently large, then $f(x)$ is $O(h(x))$.
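For instance, $f(x) = x^2 + 2x + 1 \le x^2 + 2x^2 + x^2 = 4x^2$ for $x > 1$, so $C = 4$ and $k = 1$ are witnesses showing that $f(x)$ is $O(x^2)$. A quick numeric sanity check of this pair (an illustrative example, not from the notes):

```python
# Check the witnesses C = 4, k = 1 for f(x) = x^2 + 2x + 1 being O(x^2):
# the definition requires |f(x)| <= C * |x^2| whenever x > k.
C, k = 4, 1
f = lambda x: x**2 + 2*x + 1
assert all(f(x) <= C * x**2 for x in range(k + 1, 1000))
```

Any larger $C$ (with the same $k$) is also a valid witness pair, consistent with the remark that witnesses are never unique.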
3.2.2 Big-O Estimates for Some Important Functions
The reference functions commonly used in big-O estimates, in increasing order of growth, are $1$, $\log n$, $n$, $n \log n$, $n^2$, $2^n$, and $n!$. Other functions are usually represented as big-O of these functions.
🌰 e.g.
Use big-O notation to estimate the sum of the first $n$ positive integers.
Solution:
$1 + 2 + \cdots + n \le n + n + \cdots + n = n \cdot n = n^2$, so $1 + 2 + \cdots + n$ is $O(n^2)$, taking $C = 1$ and $k = 1$.
📋 Big-O Estimates for Polynomials
Let $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$, where $a_0, a_1, \ldots, a_n$ are real numbers. Then $f(x)$ is $O(x^n)$. The leading term of a polynomial dominates its growth.
Proof: For $x > 1$, the triangle inequality gives
$$|f(x)| \le |a_n| x^n + |a_{n-1}| x^{n-1} + \cdots + |a_0| \le x^n \left( |a_n| + |a_{n-1}| + \cdots + |a_0| \right),$$
so the witnesses $C = |a_n| + |a_{n-1}| + \cdots + |a_0|$ and $k = 1$ show that $f(x)$ is $O(x^n)$.
📋 Useful Big-O Estimates Involving Logarithms, Powers, and Exponents
- If $d > c > 1$, then $n^c$ is $O(n^d)$, but $n^d$ is not $O(n^c)$.
- If $b > 1$ and $c$ and $d$ are positive, then $(\log_b n)^c$ is $O(n^d)$, but $n^d$ is not $O((\log_b n)^c)$. Every positive power of the logarithm of $n$ to the base $b$, where $b > 1$, is big-O of every positive power of $n$, but the reverse relationship never holds.
- If $b > 1$ and $d$ is positive, then $n^d$ is $O(b^n)$, but $b^n$ is not $O(n^d)$. Every power of $n$ is big-O of every exponential function of $n$ with a base that is greater than one, but the reverse relationship never holds.
- If $c > b > 1$, then $b^n$ is $O(c^n)$, but $c^n$ is not $O(b^n)$. If we have two exponential functions with different bases greater than one, one of these functions is big-O of the other if and only if its base is smaller than or equal to the other's.
📋 The Growth of Combinations of Functions
- If $f_1(x)$ is $O(g_1(x))$ and $f_2(x)$ is $O(g_2(x))$, then $(f_1 f_2)(x)$ is $O(g_1(x) g_2(x))$.
- If $f_1(x)$ and $f_2(x)$ are both $O(g(x))$, then $(f_1 + f_2)(x)$ is $O(g(x))$.
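A standard worked instance of these two rules (the function here is an illustrative choice, not from the notes): estimate the growth of $f(x) = (x+1)\log(x^2+1) + 3x^2$.

```latex
% x+1 is O(x), and \log(x^2+1) is O(\log x), since x^2+1 \le 2x^2 for x \ge 1
% gives \log(x^2+1) \le \log 2 + 2\log x.
(x+1)\log(x^2+1) \text{ is } O(x \log x) \quad \text{(product rule)}
% Both x\log x and 3x^2 are O(x^2), hence:
f(x) = (x+1)\log(x^2+1) + 3x^2 \text{ is } O(x^2) \quad \text{(sum rule)}
```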
3.2.3 Other Notations
**Big-Omega Notation**: Let $f$ and $g$ be functions. We say that $f(x)$ is $\Omega(g(x))$, read as “$f(x)$ is big-omega of $g(x)$”, if there are constants $C > 0$ and $k$ such that $|f(x)| \ge C|g(x)|$ whenever $x > k$.
$f(x)$ is $\Omega(g(x))$ if and only if $g(x)$ is $O(f(x))$. This follows from the definitions.
⚠️ Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound. Big-O tells us that a function grows no faster than another, while big-Omega tells us that a function grows at least as fast as another.

**Big-Theta Notation**: Let $f$ and $g$ be functions. We say that $f(x)$ is $\Theta(g(x))$, read as “$f(x)$ is big-theta of $g(x)$”, if and only if there exist constants $C_1$, $C_2$ and $k$ such that $C_1 |g(x)| \le |f(x)| \le C_2 |g(x)|$ whenever $x > k$. That is, $f(x)$ is $O(g(x))$ and $f(x)$ is $\Omega(g(x))$.
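As a concrete check of the definition, the sketch below verifies numerically that $f(x) = 3x^2 + 8x \log x$ is $\Theta(x^2)$, using the hand-derived illustrative constants $C_1 = 3$, $C_2 = 11$, $k = 1$ (for $x \ge 1$, $\log x \ge 0$ gives the lower bound, and $\log x \le x$ gives $8x\log x \le 8x^2$ for the upper bound):

```python
import math

# f(x) = 3x^2 + 8x*log(x); check C1*x^2 <= f(x) <= C2*x^2 whenever x > k.
f = lambda x: 3 * x**2 + 8 * x * math.log(x)
C1, C2, k = 3, 11, 1
assert all(C1 * x**2 <= f(x) <= C2 * x**2 for x in range(k + 1, 1000))
```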
⚠️ Sometimes writers are careless and write as if big-O notation has the same meaning as big-Theta.
📋 Big-Theta Estimates for Polynomials
Let $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$, where $a_0, a_1, \ldots, a_n$ are real numbers with $a_n \ne 0$. Then $f(x)$ is of order $x^n$, or $\Theta(x^n)$.
📋 (not examined; extracurricular material) Stirling’s Approximation: $n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, or more precisely, $\sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{1/(12n+1)} < n! < \sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{1/(12n)}$.
Proof:
3.3 How to Measure Complexity of Algorithms
An algorithm provides a satisfactory solution when the output is correct and the computation is efficient. Efficiency is measured by time complexity and space complexity. In this course we focus on time complexity; the space complexity of algorithms is studied in later courses.
We measure time complexity in terms of the number of operations an algorithm uses, such as comparisons and arithmetic operations (addition, multiplication, etc.). Big-O and big-Theta notations are applied.
We will focus on the **worst-case time complexity** of an algorithm. This provides an upper bound on the number of operations an algorithm uses to solve a problem with an input of a particular size.
⚠️ It is usually much more difficult to determine the **average-case time complexity** of an algorithm, because we often have difficulty defining what the average case is.
🌰 e.g. Worst-Case Complexity of Linear Search
Determine the time complexity of the linear search algorithm.
```
procedure linear search(x: integer, a1, a2, …, an: distinct integers)
i := 1
while (i ≤ n and x ≠ ai)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location
{location is the subscript of the term that equals x, or is 0 if x is not found}
```
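The pseudocode above translates directly into Python (a minimal sketch; note Python lists are 0-indexed, while the pseudocode uses 1-based subscripts):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is not found."""
    i = 1
    while i <= len(a) and x != a[i - 1]:   # a[i - 1] is the pseudocode's a_i
        i += 1
    return i if i <= len(a) else 0
```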
Solution:
Count the number of comparisons.
At each step of the loop, two comparisons are made: $i \le n$ and $x \ne a_i$.
To end the loop, one comparison $i \le n$ is made.
After the loop, one more comparison $i \le n$ is made.
If $x = a_i$, $2i + 1$ comparisons are used. If $x$ is not in the list, $2n$ comparisons are made inside the loop and then an additional comparison is used to exit the loop. So, in the worst case, $2n + 2$ comparisons are made.
Hence, the complexity is $\Theta(n)$.
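The comparison count can be verified empirically with an instrumented version of the search (an illustrative sketch mirroring the accounting above; the function name is my own):

```python
def linear_search_count(x, a):
    """Linear search that also counts comparisons, mirroring the analysis above."""
    n = len(a)
    comparisons = 0
    i = 1
    while True:
        comparisons += 1           # comparison: i <= n
        if i > n:
            break                  # this is the comparison that exits the loop
        comparisons += 1           # comparison: x != a_i
        if x == a[i - 1]:
            break
        i += 1
    comparisons += 1               # the final i <= n test after the loop
    location = i if i <= n else 0
    return location, comparisons
```

If x sits at position i the count is 2i + 1, and if x is absent the count is 2n + 2, matching the worst-case analysis.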
