【Algorithm】 Coding test/interview


Python

  1. mutable vs Immutable [post1]
    1. import copy
    2. Memory sharing (1): = (assignment) makes both names refer to the same object.
    3. Dicts/lists nested inside a dict/list still share memory (2): .copy(), copy.copy(), [:]
    4. Memory is reallocated down to the nested dicts/lists (3): copy.deepcopy() (see the sketch after this list)
  2. Initializing a two-dimensional list: a = [[0]*3 for i in range(3)] (not [[0]*3]*3, which repeats the same row object three times)
  3. collections.namedtuple, collections.deque
  4. How to use Copilot:
    1. Tell Copilot exactly which methods or strategies to use. E.g. “Check if the list is repeating using a set of names.”
    2. Ask Copilot step by step. E.g. [No] # return the player at the given rank ⇒ [Yes] (1) # Sort self.standings by score, then games_played (2) # return the player at the given rank
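
A minimal sketch of the three copy levels above (assignment, shallow copy with .copy()/[:], deep copy with copy.deepcopy()), using a toy nested list:

```python
import copy

a = [1, 2, [3, 4]]           # a list holding an inner (nested) list

b = a                        # (1) assignment: b and a share the same object
c = a.copy()                 # (2) shallow copy: new outer list, inner list still shared
d = copy.deepcopy(a)         # (3) deep copy: the inner list is reallocated too

a[0] = 99                    # only b sees this (same outer object)
a[2].append(5)               # b and c both see this (shared inner list)

print(b)                     # [99, 2, [3, 4, 5]]
print(c)                     # [1, 2, [3, 4, 5]]  -> outer copied, inner shared
print(d)                     # [1, 2, [3, 4]]     -> fully independent
```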

1. Recursion

  • Recursive function: a function that calls itself.
    • Base case: to stop the recursion
    • Recursive case: where the function calls on itself
  • Examples
    • Easy: Factorial, Fibonacci numbers, Euclid’s algorithm (GCD)
    • Hard: Towers of Hanoi, Maze, Cells in a Blob, N-Queens, Permutation
  • Any problem solved recursively can also be solved through the use of iteration.
  • 🪡 Binary search
    • Find a target item in a sorted list by repeatedly halving the search range. [O(logN), bisect]
    • Keep left/right variables → compute the middle index and compare → stop when left ≥ right (see the sketch after this list).
  • 🪡 Backtracking: (1) non-promising → fail (prune) (2) success → return (3) otherwise visit children recursively.
  • 🪡 State-space tree
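
A minimal binary-search sketch matching the left/right-variable description above, assuming the input list is sorted in ascending order; the standard-library bisect gives the same index:

```python
import bisect

def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2        # middle index
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1               # search the right half
        else:
            right = mid - 1              # search the left half
    return -1                            # left > right -> not found

nums = [1, 3, 5, 7, 9, 11]
print(binary_search(nums, 7))            # 3
print(bisect.bisect_left(nums, 7))       # 3 (library equivalent)
```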

2. Sort

  • Selection sort
    • Repeatedly find the largest value in the unsorted part and move it to the end.
  • Bubble sort
    • Scan left to right, pushing back the larger of each adjacent pair. Repeat N times.
  • Insertion sort
    • Scan left to right; each item is inserted into its correct position within the already-sorted part on its left.
  • Merge sort
    • not in-place
    • divide and conquer
    • mergesort(left) / mergesort(right) / merge
  • Quick sort
    • in-place
    • pivot
    • From the left, find an item larger than the pivot; from the right, find an item smaller than the pivot; swap them (see the sketch after this list).
  • Heap sort
    • in-place
    • Max heap / min heap / complete binary tree / max-heapify (assumes only the top node violates the max-heap property and sifts it down)
    • Main operation:
      1. Build heap: apply max-heapify to each non-leaf (internal) node, from the bottom up.
      2. Insert: place the item at the last position, then sift up (bottom to top).
      3. Delete: remove the top node, move the last leaf node to the top, then sift down (top to bottom).
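
Minimal sketches of merge sort (not in-place) and quick sort (in-place); note that the quick-sort partition here is the Lomuto variant (last element as pivot), a simpler alternative to the two-sided scan described above:

```python
def merge_sort(arr):
    """Return a new sorted list (not in-place)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])          # mergesort(left)
    right = merge_sort(arr[mid:])         # mergesort(right)
    merged, i, j = [], 0, 0               # merge step
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def quick_sort(arr, lo=0, hi=None):
    """Sort arr in place using Lomuto-style partitioning."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    pivot, i = arr[hi], lo                # last element as pivot
    for j in range(lo, hi):
        if arr[j] < pivot:                # smaller items move to the left side
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]     # place pivot in its final position
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)
    return arr

print(merge_sort([5, 2, 9, 1, 7]))        # [1, 2, 5, 7, 9]
print(quick_sort([5, 2, 9, 1, 7]))        # [1, 2, 5, 7, 9]
```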

3. Search tree

  • Linked structure: (1) Key(data) / (2,3) left+right child address / (4) parent address
  • Tree traversal
    • Recursively: Inorder(left root right) / Preorder(root left right) / Postorder (left right root)
  • Binary search tree (=ordered tree)
    • Definition: Internal nodes each store a key greater than all the keys in the node’s left subtree and less than those in its right subtree.
    • (oper1) Search: O(height = logN), min (leftmost), max (rightmost), Successor, Predecessor
    • (oper2) Insert: O(height = logN); just find an empty leaf position.
    • (oper3) Delete: if the node has 0 or 1 children, it is simple; if it has 2, find its successor to replace it (see the sketch after this list).
    • Weakness: worst-case → O(N) → Keep balanced → red-black tree
  • Red-black tree
    • It is a kind of Binary search tree.
    • Additional properties
      1. Every node is either red or black.
      2. The root and all NIL (leaf) nodes are considered black.
      3. A red node does not have a red child. (Red can not exist consecutively)
      4. Every path from a given node to any of its descendant NIL nodes goes through the same number of black nodes. [Example image]
        • h: height from the node to NIL - bh: the number of black nodes from the node to NIL (including the node itself)
    • Left and Right rotation are key operations when doing Search, Insert, Delete.
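
A minimal binary-search-tree sketch (insert, search, inorder traversal) with no balancing, so its worst case degrades to O(N) as noted above; node fields follow the linked structure described at the top of this section:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None       # left child
        self.right = None      # right child

def insert(root, key):
    """Insert key and return the (possibly new) subtree root."""
    if root is None:
        return Node(key)                  # found an empty leaf position
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Return True if key exists in the tree."""
    if root is None:
        return False
    if key == root.key:
        return True
    return search(root.left, key) if key < root.key else search(root.right, key)

def inorder(root, out):
    """Inorder traversal (left, root, right) yields keys in sorted order."""
    if root:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(inorder(root, []))                  # [1, 3, 6, 8, 10]
print(search(root, 6), search(root, 7))   # True False
```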

4. Hashing

  • Hash table: Key and value
  • Solutions of Collision:
    • Chaining (linked list): worst case → values pile up unevenly in a few buckets (unbalanced chains)
    • Open addressing (Finding other empty space)
      1. linear probing: h(key) + i → clustering problem (see the sketch after this list)
      2. quadratic probing: h(key) + i^2
      3. double hashing: use two different hash functions
        • A critical problem with these open-addressing methods is that deletion (and searching after a deletion) becomes difficult.
  • A good hash function distributes keys almost uniformly at random.
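
A toy open-addressing table with linear probing h(key) + i, only to illustrate the probing idea (fixed size, no resizing or deletion; Python’s built-in dict is what you would actually use):

```python
class LinearProbingTable:
    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size        # each slot holds (key, value) or None

    def _probe(self, key):
        """Yield slot indices h(key), h(key)+1, ... (mod size)."""
        h = hash(key) % self.size
        for i in range(self.size):
            yield (h + i) % self.size     # linear probing: h(key) + i

    def put(self, key, value):
        for idx in self._probe(key):
            if self.slots[idx] is None or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("table is full")

    def get(self, key):
        for idx in self._probe(key):
            if self.slots[idx] is None:
                return None               # hit an empty slot -> key absent
            if self.slots[idx][0] == key:
                return self.slots[idx][1]
        return None

t = LinearProbingTable()
t.put("apple", 3)
t.put("pear", 5)
print(t.get("apple"), t.get("grape"))     # 3 None
```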

5. Graph - DFS,BFS

  • Node, Edge, and adjacency matrix.
  • Graph traversal [Reference image]
    • BFS (Breadth-First Search)
      • Queue: [pseudo code] (1) While the queue is not empty (2) remove a node v (3) for each unchecked neighbor w of v (4) insert w into the queue and mark it checked (see the sketch after this list).
      • Shortest path [pseudo code]: the needed variables are d (distance) and π (predecessor, previous node).
    • DFS (Depth-First Search)
      • Recursive: [pseudo code] (1) DFS(G,v) (2) check v (3) for each adjacent node w (4) if w is unchecked, DFS(G,w)
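
Minimal BFS and DFS sketches on an adjacency-list graph (a dict of neighbor lists), following the pseudo code above; BFS also records d and π for shortest paths on unweighted graphs:

```python
from collections import deque

graph = {                      # toy undirected graph as adjacency lists
    'a': ['b', 'c'],
    'b': ['a', 'd'],
    'c': ['a', 'd'],
    'd': ['b', 'c'],
}

def bfs(graph, start):
    """Return (d, pi): distance and predecessor of every reachable node."""
    d, pi = {start: 0}, {start: None}
    queue = deque([start])
    while queue:                          # (1) while queue is not empty
        v = queue.popleft()               # (2) remove a node v
        for w in graph[v]:                # (3) for each neighbor w of v
            if w not in d:                # unchecked?
                d[w], pi[w] = d[v] + 1, v
                queue.append(w)           # (4) insert w and mark it checked
    return d, pi

def dfs(graph, v, visited=None):
    """Recursive DFS; returns the set of visited nodes."""
    if visited is None:
        visited = set()
    visited.add(v)                        # (2) check v
    for w in graph[v]:                    # (3) for each adjacent node w
        if w not in visited:              # (4) if unchecked, recurse
            dfs(graph, w, visited)
    return visited

print(bfs(graph, 'a')[0])                 # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
print(dfs(graph, 'a'))                    # all four nodes visited
```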

6. Graph2 - Minimum spanning tree

  • Node, Edge, Weight (cost) → connect all nodes by selecting edges with minimum total cost.
  • Properties: (1) the solution is not unique (2) no cycles (3) the selected edges form a tree structure (4) #edges = #nodes - 1
  • Generic MST algorithms
    • Goal: finding a safe edge, one that can be added to A (a subset of some MST solution)
    • Prerequisites: (1) Cut: a partition of the nodes into two subsets S and V-S (2) Cross: an edge crossing the two subsets (3) Respect: A contains no crossing edges
    • Fact: if A is a subset of an MST and a cut respects A, then the lightest edge crossing that cut is a safe edge for A.
  • Kruskal algorithm
    • Method: edges are sorted by weight and selected in order, skipping any edge that creates a cycle, until #edges == #nodes - 1.
    • Proof sketch: sorting gives the lightest edge; an edge that creates no cycle is a crossing edge.
    • Key: checking whether a cycle exists. (1) Connected nodes represent a subset, e.g. [{a},{b,c,f},{e},{g,h}] (2) Using a linked structure (union-find), check whether the roots are the same (see the sketch after this list).
  • Prim algorithm
    • Method: From a random node, select the lightest cross-edge.
    • Key: finding the cross-edge: each node in V-S keeps two variables, key and π. ‘key’ is the lightest weight of an edge connecting it to a node in S; ‘π’ is that connected node in S.
    • Details: (1) Among the nodes in V-S, find the node with the smallest ‘key’ value. (2) Move that node into S; the nodes in V-S adjacent to it update their ‘key’ and ‘π’ values.
    • Addition: step (1) should use a priority queue (min-heap), which improves the overall MST complexity from O(N^2) to O(M logN) (N: #nodes, M: #edges).
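
A minimal Kruskal sketch using union-find for the cycle check (path compression only, no union by rank); edges are assumed to be (weight, u, v) tuples over nodes numbered 0..N-1:

```python
def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v); returns the list of MST edges."""
    parent = list(range(num_nodes))

    def find(x):                          # root of x's subset (path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                      # same root -> edge would create a cycle
            parent[ru] = rv               # union the two subsets
            mst.append((u, v, w))
        if len(mst) == num_nodes - 1:     # #edges == #nodes - 1 -> done
            break
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))                  # [(0, 1, 1), (1, 3, 2), (1, 2, 3)]
```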

7. Shortest path

  • Note: Path is different from Tree. MST is to find a tree with minimum cost. Dijkstra aims to find a path with minimum cost in a directed graph.
  • Basics
    • Properties: (1) any sub-path inside a shortest path is itself a shortest path (2) no cycles
    • Needed variables: (1) d[node]: initialized to ‘inf’, the distance from the start node s (2) π[node]: predecessor
    • Basic operation: Relaxation (if a better path is found, update d and π)
  • Bellman-ford algorithm
    • Used when there are negative edge weights (but no negative cycles).
    • [Example image] O(n*m) is too big. Thus, Dijkstra (whose basic version is O(n^2)) does not depend on m!
  • Dijkstra algorithm
    • Assumes that there is no negative weight. A checked node is never checked again.
    • Among the unchecked nodes, the node with minimum d[node] must be the one chosen for the next step. [reference image]
    • [Pseudo code]
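
A minimal Dijkstra sketch with a priority queue (heapq), assuming non-negative weights; the graph is assumed to be a dict mapping each node to a list of (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, start):
    """Return (d, pi): shortest distance and predecessor from start."""
    d = {node: float('inf') for node in graph}    # d[node] initialized to inf
    pi = {node: None for node in graph}           # predecessors
    d[start] = 0
    heap = [(0, start)]
    while heap:
        dist, v = heapq.heappop(heap)             # unchecked node with minimum d
        if dist > d[v]:                           # stale entry: already improved
            continue
        for w, weight in graph[v]:
            if d[v] + weight < d[w]:              # relaxation
                d[w] = d[v] + weight
                pi[w] = v
                heapq.heappush(heap, (d[w], w))
    return d, pi

graph = {
    's': [('a', 1), ('b', 4)],
    'a': [('b', 2), ('c', 6)],
    'b': [('c', 3)],
    'c': [],
}
print(dijkstra(graph, 's')[0])    # {'s': 0, 'a': 1, 'b': 3, 'c': 6}
```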

8. Dynamic programming

  1. Key
    1. Building a recursive equation (recurrence relation). [Example image]
    2. The problem can be divided into sub-problems, and the optimal solution of a sub-problem is part of the optimal solution to the entire problem (optimal substructure).
    3. Memoization (for overlapping sub-problems)
    4. Bottom-up
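
Fibonacci as a minimal example of the recurrence F(n) = F(n-1) + F(n-2), solved top-down with memoization and bottom-up with tabulation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: recursive equation + memoization of overlapping sub-problems."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    """Bottom-up: fill a table from the smallest sub-problem upward."""
    dp = [0] * (n + 1)
    if n >= 1:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_memo(10), fib_bottom_up(10))    # 55 55
```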

© All rights reserved By Junha Song.