The tree with no nodes is called the null or empty tree.
A tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy.
Tree: an undirected, connected graph with no cycles.
Examples:
- The file system on a computer, where files are organized by folder hierarchy (and can be sorted by size within a folder).
- Manipulating hierarchical data.
- Making information easy to search.
- Manipulating sorted lists of data.
- Multistage decision-making.
- Authorization levels (e.g. Admin > All > RW > R).
- The JavaScript Document Object Model (DOM).

| Data Structure | Access (avg) | Search (avg) | Insertion (avg) | Deletion (avg) | Access (worst) | Search (worst) | Insertion (worst) | Deletion (worst) | Space (worst) |
|---|---|---|---|---|---|---|---|---|---|
| Binary Search Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n) |
| Cartesian Tree | N/A | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | N/A | O(n) | O(n) | O(n) | O(n) |
| B-Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| Red-Black Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| Splay Tree | N/A | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | N/A | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| AVL Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| KD Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n) |
Binary Tree
A binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child.
https://upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Binary_tree.svg/300px-Binary_tree.svg.png
It is implemented mainly using linked nodes.
Binary Tree representation:
A tree is represented by a pointer to the topmost (root) node. If the tree is empty, the root pointer is NULL. A binary tree node contains the following parts:
- Data
- Pointer to the left child
- Pointer to the right child
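A minimal sketch of such a node in Python (the class and field names are illustrative, not from any particular library):

```python
class Node:
    """A binary tree node: data plus left/right child pointers."""
    def __init__(self, data):
        self.data = data
        self.left = None   # pointer to left child (None if absent)
        self.right = None  # pointer to right child (None if absent)

# Build the tree:   1
#                  / \
#                 2   3
root = Node(1)
root.left = Node(2)
root.right = Node(3)
```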
A Binary Tree can be traversed in two ways:

Depth-First Traversal: implemented with a stack. A left-first walk around a tree starts at the root and walks along the edges, always keeping the edges on the left, until returning to the root after visiting every edge.
- Inorder Traversal (Left-Root-Right): record each leaf as soon as you see it, and each internal vertex the second time you see it.
  - Push root onto stack
  - While stack is not empty:
    - v := top(stack)
    - While v has a left child:
      - Push leftchild(v) onto stack
      - v := leftchild(v)
    - v := pop(stack)
    - List v
    - If v has a right child:
      - Push rightchild(v) onto stack
      - v := rightchild(v)
    - Else:
      - While stack is not empty and v has no right child:
        - v := pop(stack)
        - List v
      - If v has a right child:
        - Push rightchild(v) onto stack
        - v := rightchild(v)
- Preorder Traversal (Root-Left-Right): record each vertex the first time it is seen (and not again).
  - Push root onto stack.
  - While stack is not empty:
    - Pop a vertex off the stack and write it on the output list.
    - Push its children, right to left, onto the stack.
- Postorder Traversal (Left-Right-Root): record each vertex the last time it is seen (and not before).
  - Push root onto stack.
  - While stack is not empty:
    - If top(stack) is unmarked, mark it and push its children, right to left, onto the stack.
    - Else, pop the stack to the output list.
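The three stack-based traversals above can be sketched in Python (the `Node` class and function names are illustrative):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(root):
    """Root-Left-Right: pop a node, record it, push children right-to-left."""
    out, stack = [], [root] if root else []
    while stack:
        v = stack.pop()
        out.append(v.data)
        if v.right: stack.append(v.right)
        if v.left: stack.append(v.left)
    return out

def inorder(root):
    """Left-Root-Right: walk down the left spine, then visit and go right."""
    out, stack, v = [], [], root
    while stack or v:
        while v:                 # push the whole left spine
            stack.append(v)
            v = v.left
        v = stack.pop()
        out.append(v.data)       # visit after the left subtree is done
        v = v.right
    return out

def postorder(root):
    """Left-Right-Root: mark a node on first visit, output it on the second."""
    out, stack = [], [(root, False)] if root else []
    while stack:
        v, marked = stack.pop()
        if marked:
            out.append(v.data)
        else:                    # re-push marked, then children right-to-left
            stack.append((v, True))
            if v.right: stack.append((v.right, False))
            if v.left: stack.append((v.left, False))
    return out

# Demo tree:      1
#                / \
#               2   3
#              / \
#             4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
```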

Breadth-First Traversal: implemented with a queue.
- Level-Order Traversal: the level order of an ordered tree lists the vertices in increasing order of depth, with vertices of equal depth listed in their prescribed order.
  - Enqueue root.
  - While queue is not empty:
    - Dequeue a vertex and write it on the output list.
    - Enqueue its children left to right.
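The queue-based level-order traversal can be sketched as (again with an illustrative `Node` class):

```python
from collections import deque

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def level_order(root):
    """Dequeue a node, record it, enqueue its children left to right."""
    out, q = [], deque([root] if root else [])
    while q:
        v = q.popleft()
        out.append(v.data)
        if v.left: q.append(v.left)
        if v.right: q.append(v.right)
    return out

# Same demo tree as before: 1 / (2 -> 4, 5) (3)
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
```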
Binary Tree properties:
- The maximum number of nodes at level l is 2^(l-1) (levels numbered from 1 at the root).
- The maximum number of nodes in a tree of height h is 2^h - 1.
- Here height is measured as the maximum number of nodes on a root-to-leaf path.
- The minimum possible height of a tree with n nodes is ceil(log2(n+1)).
- The number of leaf nodes is always one more than the number of nodes with two children.
Examples:
- Decision trees with yes/no branches at each node

Binary Search Tree
A Binary Search Tree (BST) is an ordered binary tree.
A Binary Search Tree is a binary tree with the following additional properties:
- The left subtree of a node contains only nodes with keys less than the node's key.
- The right subtree of a node contains only nodes with keys greater than the node's key.
- The left and right subtrees must each also be a binary search tree.
https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Binary_search_tree.svg/300px-Binary_search_tree.svg.png
Equivalently: each vertex is assigned a key such that the key at any vertex v is greater than the keys in v's left subtree and less than the keys in v's right subtree.
Time complexities:
- Search: O(h)
- Insertion: O(h)
- Deletion: O(h)
- Extra space: O(n) for pointers
- h: height of the BST
- n: number of nodes in the BST
If the BST is height-balanced, then h = O(log n).
Self-balancing BSTs such as the AVL Tree, Red-Black Tree, and Splay Tree ensure that the height remains O(log n).
BSTs provide moderate access/search (quicker than linked lists, slower than arrays).
BSTs provide moderate insertion/deletion (quicker than arrays, slower than linked lists).
Examples:
- Product sorting on an e-commerce site
- Grouping or sorting video files on YouTube or Netflix
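A minimal BST sketch in Python showing the O(h) insert and search described above (names are illustrative; duplicate keys are ignored for simplicity):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Walk down comparing keys; attach the new node at the empty spot. O(h)."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicates are ignored in this sketch

def search(root, key):
    """Go left for smaller keys, right for larger ones. O(h)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
```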

| Data Structure | Access (avg) | Search (avg) | Insertion (avg) | Deletion (avg) | Access (worst) | Search (worst) | Insertion (worst) | Deletion (worst) | Space (worst) |
|---|---|---|---|---|---|---|---|---|---|
| Binary Search Tree | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n) |
Complete Binary Tree
A binary tree is called a Complete Binary Tree if all levels are completely filled except possibly the last level, and the last level has all keys as far left as possible.
http://web.cecs.pdx.edu/~sheard/course/Cs163/Graphics/CompleteBinary.jpg
Example:
- A Binary Heap (below) is a complete binary tree, which is what lets it be stored compactly in an array.

Heap (Binary Heap)
A Binary Heap is a binary tree with the following properties:
- It is a complete tree (all levels are completely filled except possibly the last level, and the last level has all keys as far left as possible).
- This property makes Binary Heaps suitable for storage in an array.
- A Binary Heap is either a Min Heap or a Max Heap.
  - Min Binary Heap: the key at the root/parent must be the minimum among all keys present in the heap, and the same property must hold recursively for all nodes in the tree.
  - Max Binary Heap: similar, except the key at the root/parent must be the maximum.
http://web.cecs.pdx.edu/~sheard/course/Cs163/Graphics/treeAsArray.png
A heap is a balanced binary tree.
The order property:
For every node n, the value in n is greater than (Max Heap) or less than (Min Heap) or equal to the values in its children (and thus is also greater/less than or equal to all values in its subtrees).
The shape property:
- All leaves are at depth d or d-1 (for some value d).
- All of the leaves at depth d-1 are to the right of the leaves at depth d.
- There is at most one node with just one child.
- That child is the left child of its parent, and it is the rightmost leaf at depth d.
Complexity:
- Get minimum in a Min Heap / get maximum in a Max Heap: O(1)
- Extract minimum from a Min Heap / extract maximum from a Max Heap: O(log n)
- Decrease key in a Min Heap / increase key in a Max Heap: O(log n)
- Insert: O(log n)
- Delete: O(log n)
Examples:
- Priority queues can be implemented efficiently with a Binary Heap because it supports insert(), delete(), extractMax()/extractMin(), and decreaseKey() in O(log n) time.
- Priority queues are used in Dijkstra's shortest path algorithm and Prim's Minimum Spanning Tree algorithm.
- Scheduling processes in operating systems.
- Heap Sort uses a Binary Heap to sort an array in O(n log n) time.
- Graph algorithms:
  - Dijkstra's Shortest Path
  - Prim's Minimum Spanning Tree
- Useful for problems such as:
  - K'th largest element in an array
  - Sorting an almost-sorted array
  - Merging K sorted arrays
The heap data structure can be used to efficiently find the k'th smallest (or largest) element in an array.
A heap is a specialized structure; it is not suited to searching for an arbitrary element (that takes O(n)).
Operations on a Min/Max Heap:

getMin()/getMax():
Returns the root element of the Min/Max Heap. Time complexity: O(1).

extractMin()/extractMax():
Removes the minimum/maximum element from the Min/Max Heap. Time complexity: O(log n), since the heap property must be restored (by calling heapify()) after removing the root.

decreaseKey()/increaseKey():
Decreases/increases the value of a key. Time complexity: O(log n). If the changed key still satisfies the heap property with respect to its parent, nothing more is needed; otherwise we traverse up to fix the violated heap property.

insert():
Inserting a new key takes O(log n) time. We add the new key at the end of the tree. If the new key satisfies the heap property with respect to its parent, we are done; otherwise we traverse up to fix the violated heap property.

delete():
Deleting a key also takes O(log n) time. We replace the key to be deleted with -infinity (Min Heap) / +infinity (Max Heap) by calling decreaseKey()/increaseKey(), which moves it to the root, and then remove it with extractMin()/extractMax().
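The core operations above can be sketched with an array-backed min-heap in Python (an illustrative hand-rolled class; Python's standard `heapq` module provides the same functionality over plain lists):

```python
class MinHeap:
    """Array-backed min-heap: children of index i live at 2*i+1 and 2*i+2."""
    def __init__(self):
        self.a = []

    def get_min(self):                # O(1): the minimum sits at the root a[0]
        return self.a[0]

    def insert(self, key):            # O(log n): append, then sift up
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
            p = (i - 1) // 2
            self.a[i], self.a[p] = self.a[p], self.a[i]
            i = p

    def extract_min(self):            # O(log n): move last to root, sift down
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        m = a.pop()
        i, n = 0, len(a)
        while True:
            l, r, s = 2 * i + 1, 2 * i + 2, i
            if l < n and a[l] < a[s]: s = l
            if r < n and a[r] < a[s]: s = r
            if s == i:
                break
            a[i], a[s] = a[s], a[i]
            i = s
        return m

h = MinHeap()
for x in [5, 1, 9, 3]:
    h.insert(x)
```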


| Data Structure | Access (avg) | Search (avg) | Insertion (avg) | Deletion (avg) | Access (worst) | Search (worst) | Insertion (worst) | Deletion (worst) | Space (worst) |
|---|---|---|---|---|---|---|---|---|---|
| Binary Heap | Θ(1) | Θ(log(n)) | Θ(log(n)) | Θ(log(n)) | O(1) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
Trie (aka Radix or Prefix tree)
A Trie (prefix tree/radix tree) is an efficient data structure for searching words in dictionaries; search complexity with a Trie is linear in the length of the word (or key) being searched.
A Trie is an ordered tree data structure used to store a dynamic set or associative array where the keys are usually strings; a node's position in the tree defines the key with which it is associated. All descendants of a node share a common prefix of the string associated with that node, and the root is associated with the empty string.
ref: https://en.wikipedia.org/wiki/Trie
Comparison:
If we store the keys in a binary search tree, a well-balanced BST needs time proportional to M * log N, where M is the maximum string length and N is the number of keys in the tree. Using a Trie, we can search for a key in O(M) time, so it is much faster than a BST.
Hashing also provides word lookup in O(M) time on average, but a Trie has no collisions, so its worst-case lookup is also O(M).
Advantages of a Trie:
The most important is prefix search: with a Trie we can find all words beginning with a given prefix (this is not possible with hashing).
Disadvantages of a Trie:
The main problem with Tries is that they require a lot of extra space.
Time complexity:
- Insert: O(M), where M is the length of the string.
- Search: O(M)
- Deletion: O(M)
- Space: O(ALPHABET_SIZE * M * N), where N is the number of keys in the trie and ALPHABET_SIZE is 26 if we only consider uppercase Latin characters.
Examples:
- Spell checking
- Prefix matching (e.g. autocomplete)
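A minimal Trie sketch in Python, using a dict per node rather than a fixed 26-pointer array (the class and method names are illustrative):

```python
class TrieNode:
    def __init__(self):
        self.children = {}        # char -> TrieNode (dict instead of 26 slots)
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):       # O(M), M = len(word)
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):       # O(M): exact-word lookup
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):  # O(M): prefix check
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node

t = Trie()
for w in ["tea", "ten", "to"]:
    t.insert(w)
```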

Suffix Tree
A Suffix Tree is mainly used to search for a pattern in a text.
The idea is to preprocess the text so that each search takes time linear in the length of the pattern. Pattern-searching algorithms like KMP and Z preprocess the pattern instead and take time proportional to the text length for each search.
https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Suffix_tree_BANANA.svg/250px-Suffix_tree_BANANA.svg.png
A Suffix Tree is a compressed trie (radix tree) of all suffixes of the text, so the following are very abstract steps to build one:
- Generate all suffixes of the given text.
- Treat each suffix as an individual word and build a compressed trie of them.
Examples:
- Find all occurrences of a pattern in a text
- Check whether a pattern occurs in a text
A Suffix Tree may not be a good choice when the text changes frequently (e.g. in a text editor).
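The abstract steps above can be sketched naively by inserting every suffix into an uncompressed trie of nested dicts. Note this sketch takes O(n^2) time and space; a real suffix tree compresses edge chains and can be built in O(n) (e.g. with Ukkonen's algorithm), which this sketch does not attempt:

```python
def build_suffix_trie(text):
    """Insert every suffix of text into a nested-dict trie. O(n^2) time/space."""
    root = {}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node = node.setdefault(ch, {})
    return root

def contains(trie, pattern):
    """A pattern occurs in text iff it is a prefix of some suffix: O(m) walk."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_suffix_trie("banana")
```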

Segment Tree - Non-Linear Data Structures
A Segment Tree is usually used when there are many range queries over a set of values, such as the minimum, maximum, or sum over an input range, interleaved with updates to individual values.
Segment Trees are implemented using an array.
Time complexity:
- Construction of the segment tree: O(N)
- Query: O(log N)
- Update: O(log N)
- Space: O(N)
Example:
- Find the maximum/minimum/sum/product of the numbers in a range
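A compact sketch of an iterative, array-backed segment tree for range sums (an illustrative implementation; the min/max variants only change the combining operator):

```python
class SegmentTree:
    """Array-backed segment tree for range sums (iterative, 0-indexed)."""
    def __init__(self, data):
        self.n = n = len(data)
        self.t = [0] * n + list(data)      # leaves live at t[n..2n-1]
        for i in range(n - 1, 0, -1):      # build internal nodes: O(n)
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def update(self, i, value):            # point update: O(log n)
        i += self.n
        self.t[i] = value
        while i > 1:
            i //= 2
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def query(self, lo, hi):               # sum over half-open [lo, hi): O(log n)
        s, lo, hi = 0, lo + self.n, hi + self.n
        while lo < hi:
            if lo & 1: s += self.t[lo]; lo += 1
            if hi & 1: hi -= 1; s += self.t[hi]
            lo //= 2; hi //= 2
        return s

st = SegmentTree([2, 1, 5, 3, 4])
# st.query(0, 5) sums all five values -> 15
```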

Ternary Search Tree
A ternary search tree is a type of trie (sometimes called a prefix tree). Unlike a standard trie, where each node holds up to 26 child pointers, each node in a ternary search tree holds only 3 pointers.
Pointers in a ternary search tree:
- The left pointer points to the subtree of characters less than the current node's character.
- The equal pointer points to the subtree for the next character of the key, followed when the current character matches.
- The right pointer points to the subtree of characters greater than the current node's character.
ref: http://d1hyf4ir1gqw6c.cloudfront.net//wp-content/uploads/TernarySearchTree.png
Complexity:

Time complexity:
- Slightly slower than a standard prefix tree.

Space complexity:
- Ternary search trees are more space-efficient than standard prefix trees.
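A minimal ternary search tree sketch in Python showing the three pointers in action (illustrative names; assumes non-empty words):

```python
class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.left = self.eq = self.right = None   # the three pointers
        self.is_word = False

def tst_insert(node, word, i=0):
    """Recursive insert: branch left/right on mismatch, follow eq on match."""
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.left = tst_insert(node.left, word, i)
    elif ch > node.ch:
        node.right = tst_insert(node.right, word, i)
    elif i + 1 < len(word):
        node.eq = tst_insert(node.eq, word, i + 1)  # char matched: advance
    else:
        node.is_word = True
    return node

def tst_search(node, word, i=0):
    if node is None:
        return False
    ch = word[i]
    if ch < node.ch:
        return tst_search(node.left, word, i)
    if ch > node.ch:
        return tst_search(node.right, word, i)
    if i + 1 == len(word):
        return node.is_word
    return tst_search(node.eq, word, i + 1)

root = None
for w in ["cat", "cap", "bat"]:
    root = tst_insert(root, w)
```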