Lecture notes for the data structures class. If you have some free time, please read and send feedback, especially on the C++ parts.
Previous lecture: binary trees and binary search trees.
Next lecture: hash tables.
In the previous lecture we discussed binary search trees (BSTs) and several basic results, such as the number of BSTs with n keys and the height of a random BST with n nodes. Perhaps the most important result is that in a BST the three basic operations — search, insert, and delete — take O(h) time, where h is the height of the tree at the time an operation is performed. Since the average height of a random BST is O(log n) (with a small variance), these operations are pretty fast on average. However, the runtime in the worst case could be bad.

Exercise: come up with a sequence of n inserts such that the tree height is Ω(n).
In this lecture, we describe several schemes for maintaining BSTs such that the height of the tree is (about) logarithmic in the number of nodes. One of the main ideas is to ensure that, for each node in the tree, the height of the left subtree and the height of the right subtree are roughly equal. For a visual illustration of three of the trees described in this lecture, you can make use of the Java applet on this page.
1. AVL Trees
AVL trees are probably the earliest balanced BSTs, proposed by the Soviet mathematicians G. M. Adelson-Velskii and E. M. Landis in 1962. The main idea is intuitive: we call a node balanced if the heights of its two subtrees differ by at most 1. A BST is an AVL tree if all of its nodes are balanced. This property is called the AVL property. There are two things we have to do to turn this idea into a theoretically sound data structure:
- Prove that if a tree is an AVL tree, then its height is O(log n), where n is the number of keys in the tree.
- Show how to do insert and delete in O(log n) time while maintaining the AVL property.
1.1. AVL property implies logarithmic height
Theorem: an AVL tree on n nodes has O(log n) height.
Proof. Let H(n) be the maximum height of an AVL tree with n nodes. We want to show that H(n) ≤ c log_{2} n, where c is some constant. It turns out that proving the contrapositive is a little easier. Let N(h) be the minimum number of nodes in an AVL tree with height h; we will derive a lower bound for N(h) in terms of h, which will then lead to an upper bound on the height in terms of n.
For simplicity, we will count all of the NULL pointers as leaves of the tree. Thus, a BST is a full binary tree: each non-NULL node has 2 children. It is not hard to see that N(0) = 1 and N(1) = 2. Now, consider an AVL tree with height h with the minimum possible number of nodes N(h). Then, it must be the case that one of the subtrees of the root has height h-1 with N(h-1) nodes and the other subtree has height h-2 with N(h-2) nodes. Thus, we have the following recurrence:

N(h) = N(h-1) + N(h-2) + 1.
Define M(h) = N(h) + 1; then we have M(0) = 2, M(1) = 3, and

M(h) = M(h-1) + M(h-2).

Because M(0) = 2 = F(3) and M(1) = 3 = F(4), the third and fourth Fibonacci numbers, and because M obeys the same recurrence as the Fibonacci recurrence, we conclude that M(h) = F(h+3). Consequently,

N(h) = F(h+3) - 1 ≥ φ^{h+1} - 1,

where φ = (1+√5)/2 ≈ 1.618 is the golden ratio (here we used the standard fact that F(k) ≥ φ^{k-2}). Taking logs on both sides of the inequality, we obtain h ≤ log_{φ}(N(h)+1) - 1. Since N(h) is the minimum possible number of nodes in any AVL tree with height h, we conclude that for any AVL tree on n nodes with height h we have n ≥ N(h), and hence h ≤ log_{φ}(n+1) - 1 = O(log n), as desired.
1.2. Maintaining the AVL property
After an insert (using the same algorithm as in a normal BST), some subtree might be two taller than its sibling subtree. We thus have to readjust the tree to rebalance it. One important property to note is that the only nodes which might become unbalanced after a new node v is inserted are the nodes along the path from v back up to the root of the tree, because those are the only nodes whose heights might be affected by the insertion.
To keep track of which nodes are balanced, we will add an extra field to each node, called the balance field, which is defined to be the height of the left subtree minus the height of the right subtree of that node.

struct AVLNode {
    // can also use static const instead of enum
    enum { LEFT_HEAVY = 1, BALANCED = 0, RIGHT_HEAVY = -1 };
    int balance;
    Key key;
    Value value;
    AVLNode* left;
    AVLNode* right;
    AVLNode* parent;

    AVLNode(const Key& k, const Value& v) :
        key(k), value(v), parent(NULL), left(NULL), right(NULL),
        balance(BALANCED) {}

    std::string to_string() const {
        std::ostringstream oss;
        oss << key << "(" << balance << ")";
        return oss.str();
    }
};
We want a node’s balance to be -1, 0, or +1. After inserting a node v into the tree, let a be the first node on the path from v back to the root that is unbalanced. Note that a’s balance factor has to be +2 or -2. Let b be the ancestor of v which is a child of a. Note that v’s parent cannot be unbalanced after v’s insertion, and thus a has to be at the very least v’s grandparent. Node b might be v’s parent.
We look for a by moving up the tree from v, readjusting the nodes’ balance fields along the way. If we do not find a, then we are done; no rebalancing is needed. If we do find a, then there are four cases to consider: the right-right case, the right-left case, the left-left case, and the left-right case.

Below is the picture for the right-right case, which should be self-explanatory:
We readjust the tree by doing a left rotation about node a as shown in the picture. For the right-left case, we perform a double rotation: imagine doing a right-rotate about node c first to bring b up, and then a left-rotate about node a to bring b one more step up. Note that the new node can be in either T2 or T3 in the following picture.

The left-left and the left-right cases are symmetric to the above two cases. A very important point to notice is that after doing the rotation (double or single), we no longer have to fix the balance fields of nodes further up the tree!
Exercise: argue why after a single or double rotation, all nodes higher up in the tree are balanced.
Here is a simple interface for the AVL tree:

/**
 * *****************************************************************************
 * AVLTree.h: a simple implementation of the AVL Tree
 * Author: Hung Q. Ngo
 * *****************************************************************************
 */
#ifndef AVLTREE_H_
#define AVLTREE_H_

#include <iostream>
#include <sstream>

template <typename Key, typename Value>
class AVLTree {
public:
    AVLTree() : root(NULL) { }

    void inorder_print()   { inorder_print(root);   std::cout << std::endl; }
    void preorder_print()  { preorder_print(root);  std::cout << std::endl; }
    void postorder_print() { postorder_print(root); std::cout << std::endl; }

    bool insert(Key, Value);
    bool find(Key key) { return search(root, key) != NULL; }
    const Value& get(Key);
    const Value& minimum();
    const Value& maximum();
    bool remove(Key);
    void clear() { clear(root); }

private:
    // The node is similar to a BSTNode; use a parent pointer to simplify code
    // A tree is simply a pointer to an AVLNode; we will assume that variables
    // of type Key are comparable using <, <=, ==, >=, and >
    // we do not allow default keys and values
    struct AVLNode {
        enum { LEFT_HEAVY = 1, BALANCED = 0, RIGHT_HEAVY = -1 };
        int balance;
        Key key;
        Value value;
        AVLNode* left;
        AVLNode* right;
        AVLNode* parent;

        AVLNode(const Key& k, const Value& v) :
            key(k), value(v), parent(NULL), left(NULL), right(NULL),
            balance(BALANCED) {}

        std::string to_string() const {
            std::ostringstream oss;
            oss << key << "(" << balance << ")";
            return oss.str();
        }
    };

    AVLNode* root;

    AVLNode* successor(AVLNode* node);
    AVLNode* predecessor(AVLNode* node);
    AVLNode* search(AVLNode*, Key);
    void rebalance(AVLNode*);
    void right_rotate(AVLNode*&);
    void left_rotate(AVLNode*&);
    void inorder_print(AVLNode*);
    void preorder_print(AVLNode*);
    void postorder_print(AVLNode*);
    void clear(AVLNode*&);
};

#include "AVLTree.cpp" // only done for template classes
#endif
And here is the implementation of the insertion strategy just described.
/**
 * ---------------------------------------------------------------------------
 * insert a new (key, value) pair, return
 *   true if the insertion was successful
 *   false if the key is found already
 * ---------------------------------------------------------------------------
 */
template <typename Key, typename Value>
bool AVLTree<Key, Value>::insert(Key key, Value value) {
    AVLNode* p = NULL;
    AVLNode* cur = root;
    while (cur != NULL) {
        p = cur;
        if (key < cur->key) cur = cur->left;
        else if (key > cur->key) cur = cur->right;
        else return false; // key found, no insertion
    }
    // insert new node at a leaf position
    AVLNode* node = new AVLNode(key, value);
    node->parent = p;
    if (p == NULL) // empty tree to start with
        root = node;
    else if (node->key < p->key)
        p->left = node;
    else
        p->right = node;
    // readjust balance of all nodes up to the root if necessary
    rebalance(node);
    return true;
}

/**
 * ---------------------------------------------------------------------------
 * left rotate around node c
 *      c                b
 *     / \              / \
 *    C   b    -->     c   B
 *       / \          / \
 *      A   B        C   A
 * adjust parent pointers accordingly
 * ---------------------------------------------------------------------------
 */
template <typename Key, typename Value>
void AVLTree<Key, Value>::left_rotate(AVLNode*& node) {
    if (node == NULL || node->right == NULL) return;
    AVLNode* c = node;
    AVLNode* b = c->right;
    AVLNode* p = c->parent;
    // first, adjust all parent pointers
    b->parent = p;
    c->parent = b;
    if (b->left != NULL) b->left->parent = c;
    // make sure c's parent points to b now
    if (p != NULL) {
        if (p->right == c) p->right = b;
        else p->left = b;
    }
    // finally, adjust downward pointers
    c->right = b->left;
    b->left = c;
    node = b;                // new local root
    if (root == c) root = b; // new root if necessary
}

/**
 * ---------------------------------------------------------------------------
 * right rotate around node c
 *        c              b
 *       / \            / \
 *      b   C   -->    A   c
 *     / \                / \
 *    A   B              B   C
 * adjust parent pointers accordingly
 * ---------------------------------------------------------------------------
 */
template <typename Key, typename Value>
void AVLTree<Key, Value>::right_rotate(AVLNode*& node) {
    if (node == NULL || node->left == NULL) return;
    AVLNode* c = node;
    AVLNode* b = c->left;
    AVLNode* p = c->parent;
    // first, adjust all parent pointers
    b->parent = p;
    c->parent = b;
    if (b->right != NULL) b->right->parent = c;
    // next, make sure c's parent points to b
    if (p != NULL) {
        if (p->right == c) p->right = b;
        else p->left = b;
    }
    // finally, adjust the downward pointers
    c->left = b->right;
    b->right = c;
    node = b;                // new local root
    if (root == c) root = b; // new root if necessary
}

/**
 * ---------------------------------------------------------------------------
 * node points to the root of a subtree which just had a height increase
 * we assume the invariant that node's parent is not unbalanced, which
 * certainly holds for the first node that got inserted
 * the 'balance' field of the parent may not be correct and we have to
 * adjust that too
 * ---------------------------------------------------------------------------
 */
template <typename Key, typename Value>
void AVLTree<Key, Value>::rebalance(AVLNode* node) {
    AVLNode* p = node->parent;
    if (p == NULL) return;

    // first, recompute 'balance' of the parent; node got a height increase
    if (p->left == node) p->balance++;
    else p->balance--;

    // if there's no grandparent or if the parent is balanced then we're done
    AVLNode* gp = p->parent; // the grandparent
    if (gp == NULL || p->balance == AVLNode::BALANCED) return;

    // if we get here then the parent p just got a height increase
    // next, see if the grandparent is unbalanced
    if (node == p->left) {
        if (p == gp->left) {
            if (gp->balance == AVLNode::LEFT_HEAVY) {
                // this is the LL case
                //      gp(+2)                p(0)
                //      /  \                 /  \
                //   p(+1)  B    -->     node    gp(0)
                //   /  \                        /  \
                // node  A                      A    B
                p->balance = gp->balance = AVLNode::BALANCED;
                right_rotate(gp);
                return;
            }
        } else { // p == gp->right
            if (gp->balance == AVLNode::RIGHT_HEAVY) {
                // this is the RL case
                //    gp(-2)                 node(0)
                //    /  \                   /    \
                //   A    p(+1)    -->    gp(x)    p(y)
                //        /  \            /  \    /  \
                //     node   D          A    B  C    D
                //     /  \
                //    B    C
                // computing the new balance is a little trickier, depending on
                // which of B & C is heavier
                switch (node->balance) {
                    case AVLNode::LEFT_HEAVY:
                        p->balance = AVLNode::RIGHT_HEAVY;
                        gp->balance = AVLNode::BALANCED;
                        break;
                    case AVLNode::BALANCED: // only happens if B & C are NULL
                        p->balance = AVLNode::BALANCED;
                        gp->balance = AVLNode::BALANCED;
                        break;
                    case AVLNode::RIGHT_HEAVY:
                        p->balance = AVLNode::BALANCED;
                        gp->balance = AVLNode::LEFT_HEAVY;
                        break;
                }
                node->balance = AVLNode::BALANCED;
                right_rotate(p);
                left_rotate(gp);
                return;
            }
        }
    } else { // node == p->right
        if (p == gp->right) {
            if (gp->balance == AVLNode::RIGHT_HEAVY) {
                // this is the RR case
                //    gp(-2)                 p(0)
                //    /  \                  /  \
                //   A    p(-1)   -->   gp(0)   node
                //        /  \          /  \
                //       B   node      A    B
                p->balance = gp->balance = AVLNode::BALANCED;
                left_rotate(gp);
                return;
            }
        } else { // p == gp->left
            if (gp->balance == AVLNode::LEFT_HEAVY) {
                // this is the LR case
                //      gp(+2)               node(0)
                //      /  \                 /    \
                //   p(-1)  D      -->    p(x)     gp(y)
                //   /  \                 /  \     /  \
                //  A   node             A    B   C    D
                //      /  \
                //     B    C
                // computing the new balance is a little trickier, depending on
                // which of B & C is heavier
                switch (node->balance) {
                    case AVLNode::LEFT_HEAVY:
                        p->balance = AVLNode::BALANCED;
                        gp->balance = AVLNode::RIGHT_HEAVY;
                        break;
                    case AVLNode::BALANCED: // only happens if B & C are NULL
                        p->balance = AVLNode::BALANCED;
                        gp->balance = AVLNode::BALANCED;
                        break;
                    case AVLNode::RIGHT_HEAVY:
                        p->balance = AVLNode::LEFT_HEAVY;
                        gp->balance = AVLNode::BALANCED;
                        break;
                }
                node->balance = AVLNode::BALANCED;
                left_rotate(p);
                right_rotate(gp);
                return;
            }
        }
    }
    rebalance(p);
}
Insertion as described above runs in O(log n) time, because it involves one pass down and (at most) one pass back up the tree, whose height is O(log n). Also, we use a node structure with a parent pointer to simplify the code. We do not strictly need parent pointers, but without them we would probably have to use a stack to move back up after an insertion.
Removing a node from an AVL tree has two steps:

- The first step is done in the same way as in a normal BST. If the node has at most one child, then we splice it out; let v be its child (or NULL). If the node has two children, we find its successor (which is the minimum node of the right subtree), put the successor’s payload in the node’s payload, and splice the successor out; in this case, let v be the successor’s right child (which could be NULL).
- In the second step, we fix the balances of all nodes from v up to the root and perform rotations accordingly. The strategy is exactly the same as in the insertion case, with one key difference: we might have to do more than one single/double rotation, possibly all the way up to the root, because the subtree rooted at v just lost 1 from its height. Consider, for instance, the case when v is the left child of its parent. We will have to rebalance at v’s parent if it was already right-leaning. Let u be v’s sibling. If u is left-leaning, then we do a double rotation; otherwise, a single rotation at the parent node will suffice. Then, we repeat the strategy if there is an overall height reduction at the parent node.
Exercise: complete the AVLTree::remove() function.
2. Red-Black Trees
Red-black trees were proposed by Rudolf Bayer in 1972 and refined by Leonidas J. Guibas and Robert Sedgewick in 1978. Guibas and Sedgewick presented the coloring scheme, and the name “RB tree” stuck. RB trees are found in many practical search structures: C++’s std::map and std::set are typically implemented with a red-black tree, and so are symbol tables in many operating systems and other languages.
One way to describe the intuition behind an RB tree is as follows. In a perfectly balanced tree, all the paths from a node to its leaves have the same length. However, this property is way too strong to be maintainable. Hence, we will design a balanced BST by keeping a “skeleton” of black nodes which has the aforementioned property of a perfectly balanced tree. At the same time, we cut the tree some slack by allowing some other nodes to be red, positioned at various places in the tree so that the constraint is looser. However, we cannot give the red nodes too much freedom, because they would destroy the balance maintained by the black skeleton. Thus, we enforce the property that red nodes cannot have red children. This way, red nodes are scattered throughout the tree, giving the “right” amount of slack while keeping the tree mostly balanced.
Rigorously, a redblack tree is a BST with the following properties:
- (Bicolor property) All nodes are colored red or black.
- (Black root and leaves) The root and the leaves (the NULL nodes) are colored black.
- (Black height property) For every internal node v, the number of black nodes we encounter on any path from v down to one of its leaves is the same. This number is called the black height of v. Note that the black height of a node does not count the node’s own color.
- (Black parent property) A red node must have a black parent. Equivalently, every red node must have two black children (no two red nodes are adjacent). This is the property that ensures the sparsity of red nodes.
Here is an example of what an RB tree looks like.
2.1. RB trees have logarithmic heights
Consider any RB tree T. Suppose we lump together every red node with its parent (which must be black); then we obtain a tree T' which is no longer a binary tree. The tree T' is a (2,4)-tree (or 2-3-4 tree), where each internal node has 2, 3, or 4 children. If the RB tree shown above is T, then its 2-3-4 counterpart is
Let h be the height of T and h' be the height of T'. Also, let n be the number of keys in T. First of all, we claim that the number of black leaves (squares in the pictures) is exactly n+1. This is because the RB tree is a full binary tree, and from that we can prove the claim by induction. (Sketch: if the left subtree of the root has a keys and the right subtree has b keys, then there are in total a+b+1 keys in the tree and (a+1)+(b+1) leaves by the induction hypothesis.)

Now, due to the 2 to 4 branching factors, the number of leaves of T' varies between 2^{h'} and 4^{h'}:

2^{h'} ≤ n+1 ≤ 4^{h'}.

By the black parent property, the height h' of T' is at least half the height h of T. Hence,

h ≤ 2h' ≤ 2 log_{2}(n+1).
2.2. How to maintain the RB properties
After an insert. We will always color the newly inserted node red, unless it is the root, in which case we color it black. Call the newly inserted node z.

If z or its parent is black, then there is nothing to do: all properties are still satisfied.

Now, suppose z’s parent is red. Then we have the “double red” problem. We fix this problem by considering the following cases.
- If z’s uncle is red, then we recolor z’s parent and uncle black and the grandparent red (it had to be black before), and consider the grandparent the new z. We potentially have a new double red problem, but it is now higher up in the tree. If z’s grandparent is the root, then we color it black and we are done. The following picture illustrates this case.
- If z’s uncle is black, then there are two subcases which can be resolved with either a single rotation or a double rotation, as seen in the following pictures.
After a delete. When we delete a node from a BST, we either splice it out or splice out its successor. Let z be the child of the node that got spliced. If we spliced a red node, then we are done; there is nothing to fix. Suppose we spliced a black node. In this case, we violate the black height property of all ancestors of z (except for the trivial case when z is the root or when z’s parent is the root).

Conceptually, if z were allowed to have “double blackness,” then the black height property would not be violated. Note that z’s sibling cannot be a leaf node; thus, the sibling must have two children. We consider the following cases.

- z’s sibling is black with a red child. In this case, we do a single or double rotation and color that child black. In essence, we give that red child one unit of z’s “blackness.” The fact that z is double black indicates that the subtree rooted at z is a little short on height; and the fact that the sibling’s side has a red node means that side has extra height to spare. Overall, a rotation toward z’s side makes sense. The following pictures illustrate the two subcases:
- z’s sibling is black with two black children, and the parent is red. In this case, we color the sibling red and the parent black, and we are done:
- z’s sibling is black with two black children, and the parent is black. In this case, we color the sibling red and the parent double black, and move the double black problem up the tree.
- Finally, if z’s sibling is red, we perform a single rotation, give the sibling the parent’s color, and color the parent red. As z’s new sibling is black, we are back to one of the previous cases.
3. (2,4)-Trees
A (2,4)-tree, also called a 2-3-4 tree, was invented by Rudolf Bayer in 1972. It is a special case of B-trees and of multiway search trees, so let us talk about multiway search trees first.
In a multiway search tree, nodes are no longer restricted to having 2 children and holding one key; nodes can hold multiple keys. A d-node is a node that holds d-1 keys and has d children. Let K_{1} ≤ K_{2} ≤ ... ≤ K_{d-1} be the keys stored at a d-node, and let v_{1}, v_{2}, ..., v_{d} be its d child pointers. Then, in a multiway search tree, for every d-node it must hold that the keys of the v_{i} subtree are between K_{i-1} and K_{i}, as shown in the following picture. (We implicitly understand that K_{0} is minus infinity and K_{d} is plus infinity.)
Here’s an example of a multiway search tree.
Searching in a multiway search tree is straightforward: to search for key K, we find the index i such that K_{i-1} < K < K_{i} and follow link v_{i} (if K = K_{i} for some i, we have found the key).
Exercise: Similarly, you should think of the algorithms for finding the maximum, minimum, successor, and predecessor.
Exercise: Does a pre/in/post-order traversal of a multiway search tree make sense? How about a level-order traversal? How do you write code for a level-order traversal of a multiway search tree?
A (2,4)-tree is a multiway search tree with two additional properties:
- Size property: all internal nodes in the tree are 2-, 3-, or 4-nodes. In particular, every internal node holds 1, 2, or 3 keys.
- Depth property: every leaf of a (2,4)-tree is at the same depth, called the depth of the tree.
Here is a sample (2,4)-tree:
3.1. A (2,4)-tree has logarithmic height
First, we show that the number of leaves of a (2,4)-tree is precisely n+1, where n is the number of keys stored in the tree. This holds for any “branching tree,” i.e. any tree in which every internal node has at least 2 children, so we prove the fact for branching trees. The proof is by induction on the height of the tree.
If the tree has height 0, then the root is a leaf which stores no key, and the claim holds trivially. If the tree has height 1, the claim also holds, because when the root is a d-node it has d children (which are leaves) and stores d-1 keys. If the tree has height at least 2, again suppose the root is a d-node. Let n_{1}, ..., n_{d} be the numbers of keys stored in the d subtrees of the root. Since all subtrees of the root are also branching trees, by the induction hypothesis the total number of leaves is

(n_{1}+1) + (n_{2}+1) + ... + (n_{d}+1) = (n_{1} + ... + n_{d}) + d = (n - (d-1)) + d = n+1,

where n is the total number of keys in the entire tree. Here, we used the fact that the root stores d-1 keys.
Next, let h be the height of a (2,4)-tree. By the same reasoning as in the RB tree case, we have 2^{h} ≤ n+1 ≤ 4^{h}, which implies h ≤ log_{2}(n+1) = O(log n).
3.2. How to maintain the (2,4)-tree properties
We will assume that we do not store duplicate keys.
After an insert. We insert a new key into a (2,4)-tree in the following way. We search for it in the tree. If the new key is found, then we do nothing, because duplicate keys are not allowed. If the new key is not found, then we end up at a leaf. Let v be the parent of that leaf (the last internal node on the search path); we insert the key into its correct relative position in node v.

Prior to the insertion of the new key, if v was a 2-node or a 3-node, then we are OK, because after the insertion v is at worst a 4-node. However, if v was already a 4-node, then we have an overflow problem we need to resolve. The solution is intuitive: we split the node into two and bring a middle key up one level, as shown in the following picture:
In the worst case, the splits propagate all the way up to the root, and the tree height increases by one.

The running time is proportional to the height of the tree, which is logarithmic, and both the size property and the depth property are still satisfied.
After a delete. To delete a key from a (2,4)-tree, we use a strategy similar to that of the BST case. If the key is at the lowest level, we simply remove it. If the key is at some node higher up, we find the successor key, which is the leftmost key of the right subtree (of the key to be removed), replace the key with its successor, and remove the successor.

The problem occurs when we try to remove a key from a 2-node at the lowest level (next to the leaves): this is called an underflow problem. To solve the underflow problem, we consider the following cases.

- If the underflowed node has an adjacent sibling which is a 3-node or a 4-node, we perform a “transfer,” as shown in the following picture, which hopefully is self-explanatory:
- If the underflowed node has only one adjacent sibling which is a 2-node, or has two adjacent siblings both of which are 2-nodes, then we perform a “fusion,” as shown in the following picture:

The fusion might cause an underflow at the parent node, in which case we repeat the process:

The fusion may propagate all the way up to the root, in which case the tree height decreases by one.
4. Splay trees
Splay trees are self-adjusting binary search trees invented by Daniel Sleator and Robert Tarjan in 1985. They do not require any additional field in each node, such as a color or balance field. All operations have amortized cost O(log n). In addition, frequently accessed nodes are closer to the root.
In a splay tree, insertion, deletion, and search are done exactly the same way as in a normal BST, with one additional splay(c) step, where c is a node to be defined later.
4.1. Splaying
To “splay” a node c, we perform O(height of the tree) baby steps. Each baby step is either a zig-zig, a zig-zag, or a zig, defined as follows.

- zig-zig is performed when c has a grandparent and its side (left or right) with respect to its parent is the same as its parent’s side with respect to its grandparent.
- zig-zag is performed when c has a grandparent and it is on a different side of its parent than its parent is with respect to the grandparent.
- zig is performed when c’s parent is the root.
To splay a node c, we repeatedly perform baby steps until the node floats up all the way to the root. The cost of splaying is proportional to the depth of the splayed node.
4.2. Which node to splay and why it works
After a search, we splay the node whose key was found; or, if the key was not found, we splay the last non-NULL node seen during the search.
After an insert, we splay the newly inserted node.
After a delete, we splay the child of the spliced node.
The main question is: why does this work, even just on average? It is conceivable that the tree becomes so unbalanced that its height is Ω(n), and thus a search might take linear time. How can we then say that on average an operation takes logarithmic time? Let us actually construct a sequence of insertions such that a later search takes linear time. Suppose we insert the keys 1, 2, ..., n in that order. After each insertion, the key ends up as the right child of the root, and a zig operation brings it up top.

Insert 1:   1

Insert 2:   1                       2
             \     -- zig -->      /
              2                   1

Insert 3:     2                        3
             / \     -- zig -->       /
            1   3                    2
                                    /
                                   1

and so on.
Now, after n insertions as described, when we search for key 1, we have to take about n steps to get down the leftmost branch, and then about n/2 splaying baby steps to bring 1 all the way up to the root. Overall, we spent cn units of time, where c is some constant. What is going for us is that the price we paid for each of the previous insertions was also some constant. Thus, when we distribute the total cost over the n+1 operations, we still have a small cost per operation.

For example, if each of the inserts took d units of time, then the total price we pay per operation is

(dn + cn) / (n+1) ≤ c + d,

which is a constant! We will turn this intuition into an (amortized) analysis by giving each operation a budget of “dollars.” Then, we show that the money left in the accounts of all the nodes, plus the new dollars, is always sufficient to pay for any of the operations. If an operation costs less money than budgeted, we deposit the residual amount into the accounts to pay for future and potentially more expensive operations.
4.3. Analysis
Define the “size” of a node v in a splay tree, denoted by n(v), to be the number of keys in its subtree. And call the “rank” of a node, denoted by r(v), the value log_{2}(2n(v)+1). Note that the ranks and the sizes change over time.
Let P be an operation to be performed: search, insert, or delete. Let cost(P) denote the cost of performing P. Our strategy is to make a “deposit” D(P) (in “dollars”) for operation P in such a way that the deposit, plus whatever amount of “money” we still have in the tree, is sufficient to pay for the cost of the operation. Our ultimate objective is to prove the following:

If each operation P has a deposit of D(P) = c·log(n) dollars, then the splay tree “bank” always has sufficient funds to pay for all operations. In particular, the amortized cost of search, insert, and delete in a splay tree is O(log n).

The splay tree bank is organized as follows. The bank maintains a strict accounting policy:

The invariant: every node v in the tree has an account with r(v) dollars in it.

We will make sure that each operation P on the tree deposits an amount D(P) sufficient for the invariant to hold. The invariant certainly holds when the tree is empty, i.e. when it has only one NULL node, whose rank is r(v) = log_{2}(2·0+1) = 0.
Let r(T) denote the total amount of money the bank T (i.e. a splay tree) possesses, called the banking reserve:

r(T) = Σ_{v ∈ T} r(v).

In order to maintain the invariant after an operation P is performed on the tree, we must make sure that the deposit D(P) is sufficiently large so that

D(P) + r(T) ≥ cost(P) + r(T'),

where T' is the tree resulting from performing operation P on T. In other words, we want the variation in the bank reserve to be small relative to the newly deposited amount:

r(T') - r(T) ≤ D(P) - cost(P).
Thus, in order to determine how large D(P) should be, we need to determine the variation r(T') - r(T) after each pair of operations: P = insert + splay, P = delete + splay, and P = search + splay. For example, an insert changes the tree from T to T_{in}, and then the splaying changes the tree from T_{in} to T'. And, because

r(T') - r(T) = (r(T') - r(T_{in})) + (r(T_{in}) - r(T)),

we can assess the variations of the individual operations separately instead of assessing the variation of each pair insert + splay, delete + splay, and search + splay.
In what follows, let T' be the tree after an operation is performed (search, insert, delete, splay, or baby step), and T be the tree before the operation.
After a delete: r(T') can only be smaller than r(T), because the ranks of all nodes from the deleted node up to the root can only decrease. Hence, in this case r(T') - r(T) ≤ 0.

After a search: r(T') - r(T) = 0, since a search by itself does not change the tree.
After an insert: suppose we just inserted node v at depth d, and let v = v_{1}, v_{2}, ..., v_{d} be the path from v up to the root of the tree. Then, the difference in the bank reserve is

r(T') - r(T) = Σ_{i=1}^{d} (r'(v_{i}) - r(v_{i})),

where r'(v_{i}) is the rank of node v_{i} after the insertion. Also, let n'(v_{i}) be the size of node v_{i} after the insertion. Then, n'(v_{1}) ≤ n(v_{2}), n'(v_{2}) ≤ n(v_{3}), and so on, up to n'(v_{d-1}) ≤ n(v_{d}). Thus, r'(v_{i}) ≤ r(v_{i+1}) for every i < d. Consequently, the sum telescopes:

r(T') - r(T) ≤ r'(v_{d}) - r(v_{1}) ≤ r'(v_{d}) = log_{2}(2n+3) = O(log n).
After a splay: since a splay operation consists of many baby steps, let us estimate the banking reserve difference after each baby step. Please refer to the pictures above in the following derivation. We will use the fact that, for any two positive real numbers a and b,

log_{2} a + log_{2} b ≤ 2 log_{2}(a+b) - 2.     (*)

This is because the inequality is equivalent to ab ≤ (a+b)^{2}/4, which is equivalent to (a-b)^{2} ≥ 0. (Below, p and gp denote c’s parent and grandparent before the baby step, and r' denotes ranks after the step.)
- zig-zig: after zig-zigging node c up two levels, from inequality (*) we can see that r(c) + r'(gp) ≤ 2r'(c) - 2. The banking reserve difference can then be bounded by:

r(T') - r(T) = r'(c) + r'(p) + r'(gp) - r(c) - r(p) - r(gp)
            = r'(p) + r'(gp) - r(c) - r(p)
            ≤ r'(c) + r'(gp) - 2r(c)
            ≤ 3(r'(c) - r(c)) - 2.

The second equality follows from the fact that r'(c) = r(gp), and the next inequality from r'(p) ≤ r'(c) and r(p) ≥ r(c).
- zig-zag: after zig-zagging node c, from inequality (*) we know r'(p) + r'(gp) ≤ 2r'(c) - 2. Thus, the difference can be bounded by

r(T') - r(T) = r'(p) + r'(gp) - r(c) - r(p) ≤ 2r'(c) - 2r(c) - 2 ≤ 3(r'(c) - r(c)) - 2.

- zig: in this case,

r(T') - r(T) = r'(c) + r'(p) - r(c) - r(p) ≤ r'(c) - r(c) ≤ 3(r'(c) - r(c)).
In conclusion, after a zig-zig or a zig-zag the banking reserve difference is at most 3(r'(c) - r(c)) - 2, and after a zig the difference is at most 3(r'(c) - r(c)). Now, when we splay a node c at depth d all the way to the root, there will be m = ⌈d/2⌉ baby steps, all of which are zig-zig or zig-zag except for possibly the last step.

Let r_{0}(c) be the rank of c before any baby step is done, and r_{i}(c) be the rank of c after the i-th baby step, for i = 1, ..., m. Then, the net banking reserve difference is at most

Σ_{i=1}^{m} [3(r_{i}(c) - r_{i-1}(c)) - 2] + 2 = 3(r_{m}(c) - r_{0}(c)) - 2m + 2 ≤ 3 log_{2}(2n+1) - d + 2.
Now, recall the inequality

r(T') - r(T) ≤ D(P) - cost(P)

that we want to maintain after each pair search + splay, insert + splay, and delete + splay. We need to determine the deposit D(P) to be made for operation P.

Consider first the pair P = search + splay. Let d be the depth of the splayed node. Then, the cost of this operation is proportional to d; and, by a change of currency, we can assume that it costs exactly d dollars. We have shown that the net banking reserve difference after search + splay is bounded by

r(T') - r(T) ≤ 3 log_{2}(2n+1) - d + 2.

Hence, if we deposit an amount D(P) = 3 log_{2}(2n+1) + 2, we would have enough to cover the cost plus the extra banking reserve needed. Note that D(P) = O(log n).
The analysis for the pair delete + splay is similar.

For insert + splay, the net difference is at most

O(log n) + 3 log_{2}(2n+1) - d + 2,

where the O(log n) term accounts for the rank increases along the insertion path; thus a deposit of D(P) = O(log n) is also sufficient.
5. Experimental performance analysis of balanced search trees
We have analyzed some of the most important balanced BSTs. They perform well theoretically by ensuring that each operation takes logarithmic time (worst-case or on average). With theoretically identical performance like this, we do not have a good basis for picking which data structure to use in a real-world problem. In such cases, the only choice is to implement them all and see which one fits best for the problem at hand.

Fortunately, Ben Pfaff has done a set of experiments like that. Here's the report. There are many interesting conclusions drawn from the experiments; the following is probably the most relevant to this course:
We found that in selecting data structures, unbalanced BSTs are best when randomly ordered input can be relied upon; if random ordering is the norm but occasional runs of sorted order are expected, then redblack trees should be chosen. On the other hand, if insertions often occur in a sorted order, AVL trees excel when later accesses tend to be random, and splay trees perform best when later accesses are sequential or clustered.
For node representation, we found that parent pointers are generally fastest, so they should be preferred as long as the cost of an additional pointer field per node is not important. If space is at a premium, threaded representations conserve memory and lag only slightly behind parent pointers in speed.