Python Basics: Lists, Tuples, and Operators

PYTHON NOTES

Unit – 1
PART A (2 Marks) - Short Answer Questions
Q1. What is the difference between a List and a Tuple in Python?

List: Lists are mutable (can be changed after creation), defined using square brackets [], and are generally
slower than tuples. Example: my_list = [1, 2, 3]

Tuple: Tuples are immutable (cannot be changed), defined using parentheses (), and are faster than lists.
Example: my_tuple = (1, 2, 3)

Q2. Define Mutable and Immutable data types with examples.

Mutable: Objects whose value can be changed after they are created.

Examples: Lists, Dictionaries, Sets.

Immutable: Objects whose value cannot be changed once created.

Examples: Integers, Floats, Strings, Tuples.
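Example (an illustrative snippet showing both behaviors):

```python
# Lists are mutable: elements can be replaced in place
nums = [1, 2, 3]
nums[0] = 99
print(nums)  # Output: [99, 2, 3]

# Tuples are immutable: assignment raises a TypeError
point = (1, 2, 3)
try:
    point[0] = 99
except TypeError as e:
    print("Error:", e)
```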

Q3. What is string slicing? Give an example.

Slicing is a technique to extract a substring from a string (or sublist from a list) using indices. The syntax is
string[start:stop:step].

Example:

Python

s = "PYTHON"

print(s[0:2]) # Output: PY
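Slicing also accepts a step and negative indices, as a further illustration:

```python
s = "PYTHON"
print(s[::2])   # Output: PTO (every second character)
print(s[::-1])  # Output: NOHTYP (step -1 reverses the string)
print(s[-3:])   # Output: HON (last three characters)
```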

Q4. What is a Dictionary in Python? How is it different from a list?

A dictionary is a collection of key:value pairs used to store data values like a map (insertion-ordered since Python 3.7). Unlike lists, which hold only a single value as an element, dictionaries hold a key:value pair.

Syntax: my_dict = {'name': 'John', 'age': 25}

Q5. Explain the significance of the __init__ method.

The __init__ method is a special method (constructor) in Python classes. It is automatically called when an
object of a class is created. It is used to initialize the state (attributes) of the object.
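A minimal illustration:

```python
class Point:
    def __init__(self, x, y):
        # Called automatically when Point(...) is created
        self.x = x
        self.y = y

p = Point(3, 4)
print(p.x, p.y)  # Output: 3 4
```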

Q6. What is the difference between break and continue statements?

Break: Terminates the loop immediately and transfers execution to the statement following the loop.

Continue: Skips the rest of the code inside the current loop iteration and jumps to the next iteration of the
loop.
Q7. What is Data Abstraction and Encapsulation?

Encapsulation: Wrapping up data (variables) and methods (functions) together into a single unit (Class).

Abstraction: Hiding the implementation details and showing only the essential features of the object to the
user.

Q8. What are Python Namespaces?

A namespace is a system to ensure that all the names in a program are unique and can be used without any
conflict. It is essentially a mapping of names to objects (e.g., Local, Global, and Built-in namespaces).
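A small sketch of the three namespaces in action:

```python
x = "global"

def outer():
    x = "local"    # lives in the function's local namespace
    print(x)       # Output: local (local name shadows the global one)

outer()
print(x)           # Output: global (the global name is untouched)
print(len("abc"))  # Output: 3 (len comes from the built-in namespace)
```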

PART B (13 Marks) - Descriptive Questions

1. Explain the different types of Operators in Python with suitable examples.

Answer:

1. Introduction

Operators are special symbols in Python that carry out arithmetic or logical computation. The value that the
operator operates on is called the operand. For example, in a + b, + is the operator and a, b are operands.

2. Classification of Python Operators

Python supports the following types of operators:

1. Arithmetic Operators
2. Comparison (Relational) Operators
3. Logical Operators
4. Assignment Operators
5. Membership Operators
6. Identity Operators
7. Bitwise Operators

1. Arithmetic Operators

These are used to perform mathematical operations like addition, subtraction, multiplication, etc.

Operator  Name            Description                               Example (x=10, y=3)

+         Addition        Adds two operands                         x + y → 13

-         Subtraction     Subtracts right operand from left         x - y → 7

*         Multiplication  Multiplies two operands                   x * y → 30

/         Division        Divides left by right (result is float)   x / y → 3.333...

%         Modulus         Returns remainder of division             x % y → 1

**        Exponentiation  Left operand raised to power of right     x ** y → 1000

//        Floor Division  Division rounded down to a whole number   x // y → 3

Example Code:

Python
a = 10
b = 3
print("Addition:", a + b)        # Output: 13
print("Floor Division:", a // b) # Output: 3
print("Exponent:", a ** b)       # Output: 1000

2. Comparison (Relational) Operators

These operators compare the values on either side and decide the relation between them. They always return a
Boolean value (True or False).

Operator Description Example (x=10, y=20)

== Equal to x == y → False

!= Not equal to x != y → True

> Greater than x > y → False

< Less than x < y → True

>= Greater than or equal to x >= y → False

<= Less than or equal to x <= y → True

Example Code:

Python
x = 10
y = 20
print(x > y) # Output: False
print(x != y) # Output: True

3. Logical Operators

These are used to combine conditional statements.


Operator Description Example

and Returns True if both statements are true x < 5 and x < 10

or Returns True if one of the statements is true x < 5 or x < 4

not Reverse the result, returns False if the result is true not(x < 5 and x < 10)

Example Code:

Python
a = True
b = False
print(a and b) # Output: False
print(a or b) # Output: True

4. Assignment Operators

Used to assign values to variables.

Operator  Example  Equivalent to

=         x = 5    x = 5

+=        x += 3   x = x + 3

-=        x -= 3   x = x - 3

*=        x *= 3   x = x * 3
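Example Code (a short illustration of compound assignment):

```python
x = 5
x += 3    # same as x = x + 3, x is now 8
x -= 2    # x is now 6
x *= 4    # x is now 24
print(x)  # Output: 24
```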

5. Membership Operators

These operators test whether a value is present in a sequence (like a string, list, or tuple).

 in: Returns True if the specified value is present in the sequence.
 not in: Returns True if the specified value is not present in the sequence.

Example Code:

Python
fruits = ["apple", "banana"]
print("banana" in fruits) # Output: True
print("orange" not in fruits) # Output: True

6. Identity Operators
These operators compare the memory locations of two objects, not just if they are equal.

 is: Returns True if both variables point to the same object in memory.
 is not: Returns True if both variables point to different objects.

Example Code:

Python
x = ["apple", "banana"]
y = ["apple", "banana"]
z = x

print(x is z) # Output: True (same object in memory)
print(x is y) # Output: False (same content, but different objects)

7. Bitwise Operators (Optional but good for high marks)

Operate on bits and perform bit-by-bit operations.

 & (AND), | (OR), ^ (XOR), ~ (NOT), << (Left Shift), >> (Right Shift).
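Example Code (illustrative values, showing the binary patterns in comments):

```python
a = 10   # binary 1010
b = 4    # binary 0100
print(a & b)   # Output: 0  (no common set bits)
print(a | b)   # Output: 14 (binary 1110)
print(a ^ b)   # Output: 14 (bits set in exactly one operand)
print(a << 1)  # Output: 20 (left shift = multiply by 2)
print(a >> 1)  # Output: 5  (right shift = floor divide by 2)
```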

Conclusion:

Python provides a rich set of built-in operators that form the foundation of logic building in programming,
ranging from basic arithmetic to complex logical and object identity checks.

2. Discuss Python Control Flow statements (Conditionals and Loops) with flowcharts and syntax.

Answer:

1. Introduction to Control Flow Control flow refers to the order in which individual statements, instructions,
or function calls are executed or evaluated in a program. By default, Python executes code sequentially (line by
line). However, we often need to alter this flow based on decisions or repetitions.

Control flow is categorized into two main types:

1. Conditional (Selection) Statements (if, if-else, elif)


2. Iterative (Looping) Statements (for, while)

Section 1: Conditional Statements (Decision Making)

These statements allow the program to execute a block of code only if a specific condition is true.

A. The if Statement

This is the simplest form of decision-making. It executes a block of code only if the condition is True.

 Syntax:

Python

if condition:
    # statement(s) to execute if condition is true

 Example:
Python

age = 18
if age >= 18:
    print("You are eligible to vote.")
B. The if...else Statement

This handles two possibilities. If the condition is True, the if block executes; otherwise, the else block executes.

 Syntax:

Python

if condition:
    # Executes if True
else:
    # Executes if False

 Example:

Python

num = 10
if num % 2 == 0:
    print("Even Number")
else:
    print("Odd Number")
C. The if...elif...else Ladder

Used when checking multiple conditions sequentially. As soon as one condition is true, its block is executed,
and the rest are skipped.

 Syntax:

Python

if condition1:
    # block 1
elif condition2:
    # block 2
else:
    # block 3 (default)

 Example:

Python

marks = 75
if marks >= 90:
    print("Grade A")
elif marks >= 70:
    print("Grade B")
else:
    print("Grade C")

Section 2: Looping Statements (Iteration)

Loops allow us to execute a block of code repeatedly until a certain condition is met.
A. The while Loop

A while loop repeatedly executes a target statement as long as a given condition remains true. It is known as an
entry-controlled loop.

 Syntax:

Python

while condition:
    # statements
    # update iterator

 Example: Printing numbers 1 to 5.

Python

count = 1
while count <= 5:
    print(count)
    count = count + 1
B. The for Loop

The for loop in Python is used to iterate over a sequence (like a list, tuple, dictionary, or string). It is a definite
loop (we know how many times it will run based on the sequence length).

 Syntax:

Python

for value in sequence:
    # statements
 Example: Iterating through a list.

Python

fruits = ["apple", "banana", "cherry"]
for x in fruits:
    print(x)

 Using range() function in for loop:

Python

for i in range(5): # Generates 0, 1, 2, 3, 4
    print(i)

Section 3: Loop Control Statements

Sometimes we need to change the normal execution of a loop.

1. break: Terminates the loop immediately.


2. continue: Skips the current iteration and jumps to the next one.
3. pass: Does nothing (placeholder for future code).

Example of break:

Python

for i in range(10):
    if i == 5:
        break # Stops loop when i is 5
    print(i)
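For completeness, short illustrations of continue and pass as well:

```python
# continue: skip even numbers
for i in range(5):
    if i % 2 == 0:
        continue   # jump straight to the next iteration
    print(i)       # Output: 1 then 3

# pass: a placeholder that does nothing
for i in range(3):
    pass           # loop body to be written later
```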

Conclusion: Control flow statements are the building blocks of logical programming. Conditionals allow for
decision-making, while loops facilitate efficient repetition of tasks without rewriting code.

3. Explain List operations and List methods in Python with examples.

Answer:

1. Introduction to Python Lists A List is a collection of items ordered in a sequence. It is very flexible and is
one of the most used data types in Python.

 Ordered: The items have a defined order, and that order will not change.
 Mutable: We can change, add, and remove items in a list after it has been created.
 Heterogeneous: A list can hold items of different data types (integers, strings, floats, etc.).

Syntax: Lists are created by placing elements inside square brackets [], separated by commas.

Python

my_list = [1, "Hello", 3.14, True]

2. Basic List Operations

These are operations performed using operators or basic syntax.


A. Indexing (Accessing Elements)

You can access individual elements using their index. Python supports both positive (0 to n-1) and negative
indexing (-1 starts from the end).

 Example:

Python

L = ['a', 'b', 'c', 'd']


print(L[0]) # Output: 'a'
print(L[-1]) # Output: 'd' (Last element)
B. Slicing (Accessing a Range)

Slicing allows you to get a sub-list.

 Syntax: List[start : stop : step]


 Example:

Python

nums = [0, 1, 2, 3, 4, 5]
print(nums[1:4]) # Output: [1, 2, 3] (Stop index is excluded)
print(nums[:3]) # Output: [0, 1, 2]
C. Concatenation (+)

Joins two lists together.

 Example:

Python

L1 = [1, 2]
L2 = [3, 4]
print(L1 + L2) # Output: [1, 2, 3, 4]
D. Repetition (*)

Repeats the list elements a specified number of times.

 Example:

Python

L = ["Hi"]
print(L * 3) # Output: ['Hi', 'Hi', 'Hi']
E. Membership (in)

Checks if an item exists in the list. Returns True or False.

 Example:

Python

L = [10, 20, 30]


print(20 in L) # Output: True

3. Built-in List Methods


Python provides several built-in methods to manipulate lists. These are crucial for the 13-mark answer.

A. Adding Elements

1. append(element): Adds a single element to the end of the list.

Python

L = [1, 2]
L.append(3)
# L is now [1, 2, 3]

2. extend(iterable): Adds elements of a list (or any iterable) to the end of the current list.

Python

L = [1, 2]
L.extend([3, 4])
# L is now [1, 2, 3, 4]

3. insert(index, element): Inserts an element at a specified position.

Python

L = [1, 3]
L.insert(1, 2) # Insert 2 at index 1
# L is now [1, 2, 3]
B. Removing Elements

1. remove(value): Removes the first occurrence of the specified value.

Python

L = [10, 20, 30, 20]
L.remove(20)
# L is now [10, 30, 20]

2. pop(index): Removes and returns the element at the specified index. If no index is specified, it
removes the last item.

Python

L = [10, 20, 30]
val = L.pop(1)
# val is 20, L is now [10, 30]

3. clear(): Removes all elements from the list.

Python

L.clear()
# L is now []
C. Utility Methods

1. sort(): Sorts the list in ascending order by default.

Python
L = [3, 1, 2]
L.sort()
# L is now [1, 2, 3]

2. reverse(): Reverses the order of the list.

Python

L = [1, 2, 3]
L.reverse()
# L is now [3, 2, 1]

3. count(value): Returns the number of times a value appears in the list.

Python

L = [1, 1, 2, 3]
print(L.count(1)) # Output: 2

4. index(value): Returns the index of the first occurrence of the specified value.

Python

L = [10, 20, 30]
print(L.index(20)) # Output: 1

4. Elaborate on Object-Oriented Programming (OOP) concepts in Python: Classes, Objects, and


Inheritance.

Answer:

1. Introduction to OOP

Object-Oriented Programming (OOP) is a programming paradigm that structures programs so that properties
and behaviors are bundled into individual objects. It models real-world entities (like a Car, Student, or
Employee) into code.

2. Class and Object

A. Class (The Blueprint)

A Class is a user-defined data type that acts as a blueprint or template for creating objects. It defines the
attributes (variables) and methods (functions) that the objects created from the class will have.

 It does not allocate memory when defined.


 Syntax:

Python

class ClassName:
# attributes
# methods
B. Object (The Instance)

An Object is an instance of a class. When a class is defined, no memory is allocated until an object is created.
An object has three characteristics: State (attributes), Behavior (methods), and Identity.
 Syntax: object_name = ClassName()

Example Program: Class and Object


Python

class Student:
    # Constructor method to initialize attributes
    def __init__(self, name, roll_no):
        self.name = name
        self.roll_no = roll_no

    # Method to display details
    def display(self):
        print("Name:", self.name)
        print("Roll No:", self.roll_no)

# Creating Objects
s1 = Student("Alice", 101)
s2 = Student("Bob", 102)

# Accessing methods
s1.display()
s2.display()

3. Inheritance

Inheritance is a powerful feature of OOP that allows a class (Child Class) to derive or inherit the properties and
methods of another class (Parent Class).

 Parent Class (Base Class): The class being inherited from.


 Child Class (Derived Class): The class that inherits the properties.
 Advantage: It promotes Code Reusability. You don't have to rewrite the same code again.

Syntax:

Python

class ParentClass:
# body of parent
class ChildClass(ParentClass):
# body of child

4. Types of Inheritance

Python supports several types of inheritance. You should explain these with block diagrams.

A. Single Inheritance

A child class inherits from only one parent class.

 Structure: Parent A → Child B


 Example Code:

Python

class Animal:
    def speak(self):
        print("Animal Speaking")

class Dog(Animal): # Dog inherits from Animal
    def bark(self):
        print("Dog Barks")

d = Dog()
d.speak() # Calls Parent method
d.bark()  # Calls Child method
B. Multiple Inheritance

A child class inherits from more than one parent class. Python supports this (unlike Java).

 Structure: Parent A + Parent B → Child C


 Example Code:

Python

class Father:
    def height(self):
        print("Tall height")

class Mother:
    def color(self):
        print("Fair color")

class Child(Father, Mother): # Inherits from both
    pass

c = Child()
c.height()
c.color()
C. Multilevel Inheritance

A child class inherits from a parent, who in turn inherits from a grandparent. It forms a chain.

 Structure: Grandparent A → Parent B → Child C


 Example: A SportsCar inherits from Car, which inherits from Vehicle.
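The chain described above can be sketched in code (illustrative class and method names):

```python
class Vehicle:
    def start(self):
        print("Vehicle started")

class Car(Vehicle):           # Car inherits from Vehicle
    def drive(self):
        print("Car driving")

class SportsCar(Car):         # SportsCar inherits from Car (and Vehicle)
    def race(self):
        print("SportsCar racing")

sc = SportsCar()
sc.start()  # inherited from the grandparent
sc.drive()  # inherited from the parent
sc.race()   # its own method
```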

D. Hierarchical Inheritance

Multiple child classes inherit from a single parent class.

 Structure: Parent A → Child B, Child C, Child D


 Example: Cat and Dog both inherit from Animal.
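A short sketch of the Cat/Dog example (illustrative method names):

```python
class Animal:
    def speak(self):
        print("Animal speaks")

class Cat(Animal):            # both children inherit from Animal
    def meow(self):
        print("Meow")

class Dog(Animal):
    def bark(self):
        print("Woof")

Cat().speak()  # the parent method is shared by both children
Dog().speak()
```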

5. Explain Dictionaries in Python. Discuss the various methods to access and modify dictionary elements.

Answer:

1. Introduction to Dictionaries

A Dictionary in Python is an unordered collection of data values, used to store data values like a map. Unlike
other Data Types that hold only a single value as an element, Dictionary holds a key:value pair.

 Key-Value Pair: Each element is defined by a unique key that maps to a specific value.
 Mutable: Dictionaries can be changed (add, remove, or modify items) after creation.
 Unordered: Items are not stored in a specific index order (prior to Python 3.7).
 Unique Keys: Duplicate keys are not allowed. Keys must be immutable (e.g., Strings, Numbers,
Tuples).

Syntax:
Dictionaries are enclosed in curly braces {} and separated by commas.

Python

my_dict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964
}

2. Creating a Dictionary

You can create a dictionary by placing a sequence of key:value pairs within curly braces, separated by commas.

 Empty Dictionary:

Python

d = {}

 Dictionary with Integer Keys:

Python

d = {1: 'Apple', 2: 'Banana'}

 Using the dict() Constructor:

Python

d = dict(name="John", age=36)

3. Accessing Dictionary Elements

Since dictionaries are unordered, we cannot use integer indexes (like 0, 1) to access values. We use Keys.

A. Using Square Brackets []

Pass the key inside the brackets to retrieve the value.

Python

student = {'name': 'Alice', 'age': 20}


print(student['name']) # Output: Alice

 Note: If the key does not exist, this raises a KeyError.

B. Using the get() Method

Returns the value for the given key. If the key is not available, it returns None (or a default value you specify)
instead of an error.

Python

print(student.get('age')) # Output: 20
print(student.get('gender')) # Output: None
print(student.get('gender', 'N/A')) # Output: N/A

4. Modifying and Adding Elements


Dictionaries are mutable, so we can add new items or change the value of existing items.

A. Adding a New Item

Simply assign a value to a new key.

Python

student = {'name': 'Alice', 'age': 20}


student['city'] = 'New York'
# Result: {'name': 'Alice', 'age': 20, 'city': 'New York'}
B. Modifying an Existing Item

Assign a new value to an existing key.

Python

student['age'] = 21
# Result: {'name': 'Alice', 'age': 21, 'city': 'New York'}
C. Using update()

Updates the dictionary with elements from another dictionary or an iterable of key/value pairs.

Python

student.update({'age': 22, 'grade': 'A'})

5. Removing Elements

There are several ways to remove items.

 pop(key): Removes the item with the specified key name and returns its value.

Python

student.pop("age")

 popitem(): Removes the last inserted item (in Python 3.7+).

Python

student.popitem()

 del keyword: Deletes the item with the specified key name.

Python

del student["name"]

 clear(): Empties the entire dictionary.

Python

student.clear() # Result is {}

6. Important Dictionary Methods

These methods are essential for iterating through dictionaries.


Method Description Example

keys() Returns a view of all the keys. d.keys()

values() Returns a view of all the values. d.values()

items() Returns a view of (key, value) tuple pairs. d.items()

copy() Returns a shallow copy of the dictionary. d2 = d.copy()

Example of Iteration:

Python

d = {'a': 1, 'b': 2}

# Iterating over keys
for k in d.keys():
    print(k)

# Iterating over Key-Value pairs
for k, v in d.items():
    print(k, ":", v)

PART C (15 Marks) - Descriptive Questions


1: Design a Python class Employee with attributes name, id, and salary. Include
methods to calculate Net Salary and display details.

1. Introduction & Theoretical Background

To solve this problem, we utilize the principles of Object-Oriented Programming (OOP).


OOP is a programming paradigm based on the concept of "objects," which can contain data
(attributes) and code (methods).

Class (The Blueprint): A Class is a user-defined data type that acts as a blueprint for creating
objects. In this scenario, we define a class named Employee. It serves as a template that
defines what data an employee has (Name, ID, Salary) and what actions they can perform
(Calculate Salary, Show Details).

Object (The Instance): An object is a specific instance of a class. For example, if Employee is
the class, then Employee("John", 101, 50000) is an object representing a specific person.
The __init__ Method (Constructor): This is a special method in Python. It is automatically invoked when a new object is created. We use it to initialize the object's attributes.

The self Keyword: In Python, self represents the instance of the class. It binds the attributes with the specific arguments provided. For instance, self.name = name ensures that the name belongs to that specific object, not another one.

Encapsulation: This program demonstrates encapsulation by wrapping data (variables) and methods (functions) together into a single unit (the Class).

2. Problem Logic & Algorithm

Before writing the code, explain the step-by-step logic. This shows the evaluator you
understand the flow

Step 1: Define a class named Employee.

Step 2: Inside the class, define the __init__ method with parameters: self, name, empid, and salary.

Assign these parameters to instance variables: self.name, self.empid, self.basic_salary.

Step 3: Define a method calculate_net_salary(self).

Logic:

Calculate HRA (House Rent Allowance) as 10% of basic salary (0.10 * salary).

Calculate Tax as 5% of basic salary (0.05 * salary).

Calculate Net Salary = Basic Salary + HRA - Tax.

Return the final value.

Step 4: Define a method display(self) to print the employee's name, ID, and the calculated net
salary.

Step 5: Main Execution:

Create an object (instance) of the Employee class by passing sample data.

Call the display() method to show the output.

3. Program Implementation

Here is the code. It is concise but covers all requirements.

Python

class Employee:

    # 1. The Constructor: Initializes the data
    def __init__(self, name, empid, basic_salary):
        self.name = name
        self.empid = empid
        self.basic_salary = basic_salary

    # 2. Method to calculate salary logic
    def calculate_net_salary(self):
        # Allowances (HRA) is 10%, Tax is 5%
        hra = 0.10 * self.basic_salary
        tax = 0.05 * self.basic_salary
        net_salary = self.basic_salary + hra - tax
        return net_salary

    # 3. Method to display the final details
    def display(self):
        print("--- Employee Details ---")
        print("Name :", self.name)
        print("Employee ID:", self.empid)
        print("Basic Pay :", self.basic_salary)
        # We call the calculation method inside the print statement
        print("Net Salary :", self.calculate_net_salary())

# --- Driver Code (Main execution starts here) ---

# Creating an object 'emp1' for the class Employee
emp1 = Employee("Arun Kumar", 1024, 25000)

# Calling the display method
emp1.display()

4. Output

Always write the expected output at the end of your answer.

Plaintext

--- Employee Details ---


Name : Arun Kumar

Employee ID: 1024

Basic Pay : 25000

Net Salary : 26250.0

5. Calculation Explanation (For manual verification)

You can add this small section to fill space and show thoroughness:

Basic: 25,000

HRA (10%): 2,500

Tax (5%): 1,250

Net: 25,000 + 2,500 - 1,250 = 26,250

2: Write a Python program that accepts a sentence and performs the following operations:

1. Count the number of vowels.


2. Count the number of words.
3. Reverse each word in the sentence.

1. Introduction & Theoretical Concepts

To solve this problem, we need to manipulate text data. In Python, text is handled using Strings. A
thorough understanding of String methods and Control Flow is required.

 Strings in Python: A string is a sequence of characters enclosed in quotes. Strings are Immutable, meaning once created, they cannot be modified in place. To "change" a string, we actually create a new one.
 String Traversal: We can iterate (loop) through a string one character at a time using a for
loop to check conditions (like checking if a letter is a vowel).
 The split() Method: This is a powerful string method used to break a long sentence into smaller chunks called tokens (words). By default, sentence.split() separates the string wherever it finds white space and returns a List of words.
 String Slicing (Reversing): Python provides a unique slicing feature. The syntax is
string[start : stop : step].
o To reverse a string, we use a step of -1, i.e., string[::-1]. This reads the string from the
end to the start.

2. Algorithm (Step-by-Step Logic)

Before writing code, we outline the logical flow:

 Step 1: Start the program.


 Step 2: Accept a sentence as input from the user and store it in a variable text.
 Step 3 - Counting Vowels:
o Initialize a counter variable vowel_count = 0.
o Define a reference string vowels = "aeiouAEIOU".
o Loop through every character in text. If the character exists in vowels, increment
vowel_count.
 Step 4 - Counting Words:
o Use the split() method on text to generate a list of words.
o Calculate the length of this list using len() to get the total number of words.
 Step 5 - Reversing Words:
o Create an empty list reversed_list.
o Loop through each word in the split list.
o Reverse the current word using slicing word[::-1].
o Add the reversed word to reversed_list.
 Step 6: Display the total vowels, total words, and the final reversed sentence.

3. Visual Representation of Logic

It is helpful to visualize how the data changes at each step.

1. Input: "Hello World"


2. Split: ['Hello', 'World']
3. Reverse Process:
o 'Hello' becomes 'olleH'
o 'World' becomes 'dlroW'
4. Join: "olleH dlroW"

4. Python Program Implementation

Here is the clean, commented code.

Python
# Function to perform string operations
def process_sentence():
    # 1. Accept Input
    sentence = input("Enter a sentence: ")

    # --- Task 1: Count Vowels ---
    vowels = "aeiouAEIOU"
    v_count = 0

    # Iterate through every character
    for char in sentence:
        if char in vowels:
            v_count = v_count + 1

    # --- Task 2: Count Words ---
    # split() breaks the string at spaces into a list
    word_list = sentence.split()
    total_words = len(word_list)

    # --- Task 3: Reverse Each Word ---
    reversed_words = []
    for word in word_list:
        # Slicing with step -1 reverses the string
        rev_word = word[::-1]
        reversed_words.append(rev_word)

    # Joining the list back into a string for display
    final_output = " ".join(reversed_words)

    # --- Display Results ---
    print("\n--- RESULTS ---")
    print(f"Total Vowels : {v_count}")
    print(f"Total Words : {total_words}")
    print(f"Original Sentence: {sentence}")
    print(f"Reversed Sentence: {final_output}")

# Driver Code
process_sentence()

5. Output Trace

Writing a sample input and output is crucial for full marks.

Scenario:

 User Input: Python is Easy

Execution Trace:

1. Vowels Count:
o P (no), y (no), t (no), h (no), o (yes), n (no)...
o Vowels found: 'o', 'i', 'E', 'a' = 4
2. Word Count:
o split() creates ['Python', 'is', 'Easy']
o Length = 3
3. Reversal:
o Python -> nohtyP
o is -> si
o Easy -> ysaE

Final Output Displayed:

Plaintext
Enter a sentence: Python is Easy

--- RESULTS ---


Total Vowels : 4
Total Words : 3
Original Sentence: Python is Easy
Reversed Sentence: nohtyP si ysaE

3: Explain the concept of Exception Handling in Python. Write a program to handle the
"Division by Zero" error.

1. Introduction & Theoretical Concepts

To write a robust program, handling errors gracefully is essential. In Python, this is achieved through
Exception Handling.

 What is an Exception? An exception is an event that occurs during the execution of a program (runtime) that disrupts the normal flow of instructions. Unlike Syntax Errors (which are grammatical mistakes detected before the program runs), Exceptions happen when the syntax is correct, but the logic fails (e.g., trying to divide a number by zero or opening a file that doesn't exist).
 Why handle exceptions? If an exception is not handled, the program crashes immediately
and shows a technical error message to the user. Exception handling allows the program to
"catch" the error, display a friendly message, and continue running or exit gracefully.
 The Four Pillars of Exception Handling:
1. try block: Contains the code that might cause an error.
2. except block: Contains the code that runs if a specific error occurs in the try
block.
3. else block: Runs only if the try block is successful (no errors).
4. finally block: Runs always, regardless of whether an error occurred or not. It is
generally used for cleanup actions (like closing a file or database connection).
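The four blocks above fit together in a minimal sketch (illustrative values, no user input):

```python
try:
    result = 10 / 2            # risky code goes here
except ZeroDivisionError:
    print("Cannot divide by zero")
else:
    print("Result:", result)   # runs only when no error occurred
finally:
    print("Done")              # runs always, error or not
```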

2. Logical Flow (Algorithm)

When the program enters a try block:

1. It attempts to execute the code.


2. If no error occurs: It skips the except block and executes the else block (if present), then
the finally block.
3. If an error occurs: It stops the try block immediately and jumps to the except block.
After the except block finishes, it runs the finally block.

3. Problem Scenario: Division by Zero

The specific problem is to divide two numbers.

 Risk: If the user enters 0 as the denominator, mathematically, the result is undefined. Python
raises a ZeroDivisionError.
 Risk 2: If the user enters text (e.g., "hello") instead of a number, Python raises a
ValueError. We must handle both to make the program "crash-proof."

4. Python Program Implementation

Here is the code using all four blocks for a complete answer.

Python
def division_calculator():
    print("--- Safe Division Program ---")

    while True:  # Optional: Keeps asking until valid input
        try:
            # 1. THE TRY BLOCK
            # We place risky code here
            numerator = int(input("Enter Numerator: "))
            denominator = int(input("Enter Denominator: "))

            # This calculation is the critical point
            result = numerator / denominator

        except ZeroDivisionError:
            # 2. CATCH SPECIFIC ERROR (Division by 0)
            print(">> Error: You cannot divide by Zero! Please try again.")

        except ValueError:
            # 3. CATCH WRONG INPUT TYPE (Text instead of number)
            print(">> Error: Invalid input! Please enter numbers only.")

        else:
            # 4. THE ELSE BLOCK
            # Executes only if NO exceptions occurred
            print(f">> Success! The result is: {result}")
            break  # Exit the loop on success

        finally:
            # 5. THE FINALLY BLOCK
            # Executes always
            print("--- Execution attempt finished ---\n")

# Driver Code
division_calculator()
5. Output Trace (Case Studies)

In a 15-mark question, showing different test cases proves you understand how the flow changes
based on input.

Case 1: Normal Execution (Success)

Plaintext
--- Safe Division Program ---
Enter Numerator: 10
Enter Denominator: 2
>> Success! The result is: 5.0
--- Execution attempt finished ---

Case 2: Division by Zero (Exception Caught)

Plaintext
--- Safe Division Program ---
Enter Numerator: 10
Enter Denominator: 0
>> Error: You cannot divide by Zero! Please try again.
--- Execution attempt finished ---

Case 3: Invalid Input (Value Error)

Plaintext
--- Safe Division Program ---
Enter Numerator: Ten
>> Error: Invalid input! Please enter numbers only.
--- Execution attempt finished ---
6. Key Takeaways for the Evaluator

 Robustness: The program does not crash even if the user gives bad input.
 Hierarchy: Specific errors (ZeroDivisionError) are handled separately from generic
errors, providing better feedback to the user.
 Cleanup: The finally block ensures that any necessary concluding steps happen
(represented here by the "finished" message).
4: Explain the concept of Exception Handling in Python. Write a program to handle
"Division by Zero" error.
1. Introduction to Exception Handling

In Python, there are two types of errors that occur during coding:

1. Syntax Errors: Errors caused by wrong grammar (e.g., missing a colon, wrong indentation).
The code will not run at all.
2. Exceptions (Runtime Errors): Errors that occur during execution. The code syntax is
correct, but something goes wrong while the program is running (e.g., trying to divide by
zero, trying to open a file that doesn't exist).

Exception Handling is the method of handling these runtime errors gracefully so that the program
does not crash abruptly. Instead of showing a scary technical error message to the user, the program
catches the error and displays a friendly message or takes alternative action.

2. Blocks of Exception Handling

Python uses four main keywords (blocks) to handle exceptions:

A. try block

 This block contains the "suspicious" code that might raise an exception.
 Python "tries" to execute this code. If everything is fine, it skips the except block.
 If an error occurs here, execution immediately jumps to the except block.

B. except block

 This block contains the code that handles the error.


 It is only executed if an error occurred in the try block.
 You can have multiple except blocks to handle different types of errors (e.g., one for Math
errors, one for File errors).

C. else block (Optional)

 This block is executed only if NO exceptions occurred in the try block.


 It is useful for code that should run only when the operation is successful.

D. finally block (Optional)

 This block is always executed, regardless of whether an error occurred or not.


 It is typically used for "cleanup" actions, like closing a file or closing a database connection.
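The cleanup role of finally can be shown with a tiny sketch; the file name "demo.txt" is just an illustration, not from the notes:

```python
# Minimal sketch: 'finally' guarantees cleanup even when an error occurs.
f = open("demo.txt", "w")
try:
    f.write("hello")
    result = 10 / 0          # raises ZeroDivisionError
except ZeroDivisionError:
    print("Error caught")
finally:
    f.close()                # runs whether or not the error occurred
    print("File closed:", f.closed)
```

Here the file is closed even though the division failed, which is exactly why cleanup code belongs in finally.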

3. Python Program: Handling Division by Zero

Here is the complete program illustrating all the blocks discussed above.

Problem: We want to divide two numbers provided by the user.

 Risk 1: User might enter 0 as the denominator (Math error).


 Risk 2: User might enter text like "hello" instead of a number (Value error).

The Code:

Python
def division_program():
    print("--- Start of Program ---")

    try:
        # 1. We ask the user for input inside the try block
        numerator = int(input("Enter the numerator: "))
        denominator = int(input("Enter the denominator: "))

        # 2. We attempt the division
        result = numerator / denominator

    except ZeroDivisionError:
        # 3. This runs if the user enters 0 for the denominator
        print("Error: You cannot divide a number by zero!")

    except ValueError:
        # 4. This runs if the user enters text instead of numbers
        print("Error: Invalid input! Please enter numeric values only.")

    else:
        # 5. This runs ONLY if the division was successful
        print(f"Success! The result is: {result}")

    finally:
        # 6. This runs NO MATTER WHAT happens above
        print("--- Execution Completed (Cleaning up resources) ---")

# Calling the function
division_program()
4. Explanation of the Output Scenarios

To get full marks, you should explain the different outputs this code generates.

Scenario 1: Successful Execution

 Input: Numerator = 10, Denominator = 2


 Flow:
1. try block executes successfully (10 / 2 = 5.0).
2. except blocks are skipped.
3. else block prints the result.
4. finally block prints the closing message.
 Output:

Plaintext

Success! The result is: 5.0


--- Execution Completed (Cleaning up resources) ---

Scenario 2: Zero Division Error

 Input: Numerator = 10, Denominator = 0


 Flow:
1. Inside try, 10 / 0 triggers a ZeroDivisionError.
2. Execution jumps immediately to except ZeroDivisionError.
3. "Error: You cannot divide a number by zero!" is printed.
4. else block is skipped.
5. finally block prints the closing message.
 Output:

Plaintext

Error: You cannot divide a number by zero!


--- Execution Completed (Cleaning up resources) ---

Scenario 3: Value Error (Wrong Input Type)

 Input: Numerator = "Ten", Denominator = 5


 Flow:
1. Inside try, int("Ten") fails because "Ten" is a string, not a number.
2. It triggers a ValueError.
3. Execution jumps to except ValueError.
4. "Error: Invalid input!..." is printed.
5. else is skipped.
6. finally runs.

UNIT – 2
PART A (2 Marks) - Short Answer Questions

1. Define Abstract Data Type (ADT).

An Abstract Data Type (ADT) is a logical description of a data structure. It defines what operations
can be performed on the data but does not specify how these operations are implemented.

 Example: Stack ADT, Queue ADT, List ADT.

2. What is the difference between Shallow Copy and Deep Copy?

 Shallow Copy: Creates a new object but stores references to the original elements. Changes
to mutable elements in the copied object will affect the original object. (Function: copy.copy())
 Deep Copy: Creates a new object and recursively copies all objects found in the original.
Changes to the copy do not affect the original. (Function: copy.deepcopy())

3. Define Asymptotic Notations. Name the three types.

Asymptotic notations are mathematical tools used to describe the running time or space complexity of
an algorithm as the input size grows.

1. Big O ($O$): Worst-case complexity (Upper bound).


2. Omega ($\Omega$): Best-case complexity (Lower bound).
3. Theta ($\Theta$): Tight bound (bounded above and below); commonly quoted for the average case.
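A quick sketch (not from the notes) shows how one function exhibits all three bounds; the linear_search helper is illustrative:

```python
def linear_search(items, target):
    """Scan left to right; return the index of target or -1."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
# Best case  (Omega(1)): target is the first element -> 1 comparison
# Worst case (O(n)):     target is absent -> n comparisons
# Average    (Theta(n)): target lies somewhere in the middle on average
print(linear_search(data, 4))    # found at index 0
print(linear_search(data, 99))   # not found: -1
```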

4. What is the Divide and Conquer strategy?

It is an algorithm design paradigm that works by:

1. Divide: Breaking the problem into smaller sub-problems.


2. Conquer: Solving the sub-problems recursively.
3. Combine: Merging the solutions of sub-problems to get the final solution.
o Example: Merge Sort, Binary Search.
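Binary Search, named above, can be sketched recursively in the same Divide–Conquer shape (a minimal illustration, not from the notes):

```python
def binary_search(arr, target, low, high):
    """Search a SORTED list by repeatedly halving the search range."""
    if low > high:               # base case: empty range, not found
        return -1
    mid = (low + high) // 2      # Divide: pick the middle index
    if arr[mid] == target:
        return mid
    elif target < arr[mid]:      # Conquer: recurse on the left half
        return binary_search(arr, target, low, mid - 1)
    else:                        # Conquer: recurse on the right half
        return binary_search(arr, target, mid + 1, high)

nums = [3, 9, 10, 27, 38, 43, 82]
print(binary_search(nums, 27, 0, len(nums) - 1))  # index 3
```

Because each call discards half the range, the running time is $O(\log n)$.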

5. List the operations of a Stack ADT.

A Stack follows LIFO (Last In First Out).

 push(item): Adds an element to the top.


 pop(): Removes and returns the top element.
 peek() / top(): Returns the top element without removing it.
 isEmpty(): Checks if the stack is empty.

6. Define Recursion. Give the base condition for factorial.

Recursion is a programming technique where a function calls itself to solve a problem.

 Factorial Base Condition: if n == 0 or n == 1: return 1
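The base condition above fits into a complete factorial function like this (a minimal sketch):

```python
def factorial(n):
    # Base condition stops the recursion
    if n == 0 or n == 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * factorial(n - 1)

print(factorial(5))  # 5 * 4 * 3 * 2 * 1 = 120
```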

7. What is a Namespace in Python?

A namespace is a mapping from names to objects. It ensures that object names in a program are
unique and can be used without conflict. Examples: Local Namespace, Global Namespace, Built-in
Namespace.
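The three namespaces can be demonstrated with a short sketch (the variable names are illustrative):

```python
x = "global"          # x lives in the Global namespace

def show():
    x = "local"       # this x lives in the Local namespace of show()
    return x          # the local name shadows the global one

print(show())         # local
print(x)              # global -- the global x is untouched
print(len("abc"))     # len comes from the Built-in namespace
```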

8. What is a Queue? List its applications.

A Queue follows FIFO (First In First Out). Elements are inserted at the rear and deleted from the
front.

 Applications: Printer scheduling, CPU task scheduling, Breadth-First Search (BFS) in graphs.

PART B (13 Marks) - Descriptive & Code Questions

1: Explain the Stack ADT and its implementation using a Python List.
1. Definition of Stack ADT

A Stack is a linear data structure that follows the LIFO (Last In First Out) principle.

 This means the element that is inserted last will be the first one to be removed.
 A real-life example is a stack of plates: you place a new plate on top, and you also remove
the plate from the top. You cannot remove a plate from the middle without removing the top
ones first.

Key Characteristics:

 Insertion and Deletion happen at the same end, known as the TOP.
 It is often called a "Push-Down List".

2. Operations on Stack

A Stack ADT must support the following fundamental operations:

1. Push(item): Adds an element item to the top of the stack.


o Condition: If the stack is full, it results in a "Stack Overflow" (not applicable in
Python lists as they are dynamic).
2. Pop(): Removes and returns the top element from the stack.
o Condition: If the stack is empty, it results in a "Stack Underflow".
3. Peek() (or Top()): Returns the top element without removing it.
4. isEmpty(): Returns True if the stack is empty, False otherwise.
5. Size(): Returns the total number of elements in the stack.

3. Diagrammatic Representation

(You should draw a simple diagram like this in the exam)

Plaintext
| |
| 30 | <--- TOP (Last Element Pushed)
| 20 |
| 10 | <--- Bottom (First Element Pushed)
+-------+

 Push(40): 40 goes above 30. TOP moves to 40.


 Pop(): 30 is removed. TOP moves down to 20.

4. Python Implementation (using List)

In Python, a simple List can be used as a Stack because it supports append() (add to end/top) and
pop() (remove from end/top).

The Code:

Python
class Stack:
    def __init__(self):
        """Initialize an empty stack"""
        self.items = []

    def is_empty(self):
        """Check if the stack is empty"""
        return self.items == []

    def push(self, item):
        """Add an item to the top of the stack"""
        self.items.append(item)
        print(f"Pushed: {item}")

    def pop(self):
        """Remove and return the top item"""
        if self.is_empty():
            return "Error: Stack Underflow"
        return self.items.pop()

    def peek(self):
        """Return the top item without removing it"""
        if self.is_empty():
            return "Error: Stack is Empty"
        return self.items[-1]

    def size(self):
        """Return the number of elements in the stack"""
        return len(self.items)

    def display(self):
        """Print the stack contents"""
        print("Current Stack:", self.items)

# --- Driver Code (To demonstrate usage) ---
s = Stack()

s.push(10)   # Stack: [10]
s.push(20)   # Stack: [10, 20]
s.push(30)   # Stack: [10, 20, 30]
s.display()

print("Popped Item:", s.pop())   # Removes 30
print("Top Item:", s.peek())     # Shows 20
s.display()
5. Example Execution Flow

If we run the above code, the internal operations work as follows:

1. s.push(10): List becomes [10]. Top is 10.
2. s.push(20): List becomes [10, 20]. Top is 20.
3. s.push(30): List becomes [10, 20, 30]. Top is 30.
4. s.pop(): The last element (30) is removed. List becomes [10, 20]. Top becomes 20.
5. s.peek(): Returns 20 (the last element in the list) without removing it.

6. Complexity Analysis

For a Stack implemented using a Python List:

Operation Time Complexity Reason

Push() O(1) Appending to the end of a list is constant time (amortized).

Pop() O(1) Removing from the end of a list is constant time.

Peek() O(1) Accessing the last index is direct access.

Space O(n) Space grows linearly with the number of elements n.

7. Applications of Stack

To make your answer stand out, list 2-3 applications:

1. Function Call Management: Python uses a stack (Call Stack) to manage function calls and
recursion.
2. Expression Evaluation: Used to convert Infix expressions to Postfix (Reverse Polish
Notation).
3. Undo Mechanism: Editors use stacks to store changes so you can "Undo" (Pop) the last
action.
4. Balanced Parentheses: Checking if code has matching () or {}.
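Application 4 (balanced parentheses) is small enough to sketch directly; this checker is an illustration, not part of the notes' required code:

```python
def is_balanced(expr):
    """Return True if every ( [ { has a matching closer, using a stack."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in "([{":
            stack.append(ch)              # push every opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False              # wrong or missing opener
    return not stack                      # leftovers mean unclosed openers

print(is_balanced("{[()]}"))   # True
print(is_balanced("{[(])}"))   # False
```

The LIFO order is exactly what makes the check work: the most recently opened bracket must be the first one closed.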

2: Explain the Queue ADT and its operations with a Python program.
1. Definition of Queue ADT

A Queue is a linear data structure that follows the FIFO (First In First Out) principle.

 This means the element that is inserted first will be the first one to be removed.
 A real-life example is a queue of people standing at a ticket counter: the person who comes
first gets the ticket first and leaves the line first.

Key Characteristics:

 Rear (Tail): The end where elements are added (Enqueued).


 Front (Head): The end where elements are removed (Dequeued).

2. Operations on Queue

A Queue ADT must support the following fundamental operations:

1. Enqueue(item): Adds an element item to the Rear of the queue.


o Condition: If the queue is full (in static arrays), it causes an "Overflow".
2. Dequeue(): Removes and returns the element from the Front of the queue.
o Condition: If the queue is empty, it causes an "Underflow".
3. isEmpty(): Checks if the queue is empty. Returns True or False.
4. Size(): Returns the total number of elements currently in the queue.
5. Front() / Peek(): Returns the front element without removing it.

3. Diagrammatic Representation

 Initial State: Empty Queue.


 Enqueue(10): Queue: [10] (Front=10, Rear=10)
 Enqueue(20): Queue: [10, 20] (Front=10, Rear=20)
 Dequeue(): 10 leaves. Queue: [20] (Front=20, Rear=20)

4. Python Implementation (Two Approaches)

In Python, you can implement a Queue using a standard list, but it is not efficient for large data
because inserting/deleting at the beginning of a list takes $O(n)$ time (shifting elements). The
efficient way is to use collections.deque.

Approach 1: Using Standard List (Simple but Slower)

Python
class QueueList:
    def __init__(self):
        self.queue = []

    def enqueue(self, item):
        """Insert at the end (Rear)"""
        self.queue.append(item)
        print(f"Enqueued: {item}")

    def dequeue(self):
        """Remove from the beginning (Front)"""
        if self.is_empty():
            return "Error: Queue is Empty"
        # pop(0) removes the first element
        return self.queue.pop(0)

    def is_empty(self):
        return len(self.queue) == 0

    def size(self):
        return len(self.queue)

    def display(self):
        print("Queue:", self.queue)

# --- Driver Code ---
q = QueueList()
q.enqueue("A")
q.enqueue("B")
q.display()                       # Output: Queue: ['A', 'B']
print("Dequeued:", q.dequeue())   # Output: Dequeued: A

Approach 2: Using collections.deque (Efficient & Recommended)

Note: For a full 13 marks, mentioning this method shows deeper knowledge.

Python
from collections import deque

class Queue:
    def __init__(self):
        # deque is a double-ended queue, optimized for adding/removing at both ends
        self.queue = deque()

    def enqueue(self, item):
        self.queue.append(item)
        print(f"Enqueued: {item}")

    def dequeue(self):
        if self.is_empty():
            return "Error: Queue Underflow"
        return self.queue.popleft()  # Optimized O(1) removal

    def is_empty(self):
        return len(self.queue) == 0

    def size(self):
        return len(self.queue)

# --- Driver Code ---
q = Queue()
q.enqueue(100)
q.enqueue(200)
print(q.dequeue())  # Removes 100
5. Complexity Analysis

Operation  List Implementation (pop(0))  Deque Implementation (popleft())  Reason
Enqueue    O(1)                          O(1)                              Appending to the end is fast.
Dequeue    O(n)                          O(1)                              pop(0) shifts all elements; popleft() does not shift.
Space      O(n)                          O(n)                              Stores n elements.

6. Applications of Queue

1. Job Scheduling: Operating systems use queues to schedule processes (CPU Scheduling).
2. Printer Spooling: Documents sent to a printer are lined up in a queue.
3. Breadth-First Search (BFS): Graph traversal algorithms use queues to explore nodes.
4. Handling Requests: Web servers use queues to handle incoming user requests in order.
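Application 3 (BFS) shows the queue in action; here is a minimal sketch on a small hand-made graph (the graph itself is a hypothetical example):

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes level by level using a FIFO queue."""
    visited = [start]
    queue = deque([start])            # enqueue the starting node
    while queue:
        node = queue.popleft()        # dequeue from the front
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)   # enqueue unseen neighbours
    return visited

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))   # ['A', 'B', 'C', 'D']
```

Because the queue is FIFO, all of a node's neighbours are visited before any of their neighbours, which is exactly the level-by-level order BFS requires.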

Question: Explain the "Divide and Conquer" strategy with Merge Sort as an example. Analyze its complexity.
1. What is the Divide and Conquer Strategy?

Divide and Conquer is an algorithm design paradigm. It solves a problem by breaking it down into
smaller sub-problems, solving them recursively, and then combining their solutions to get the final
result.

It consists of three main steps:

1. Divide: Break the original problem into smaller sub-problems that are similar to the original
problem but smaller in size.
2. Conquer: Solve the sub-problems recursively. (Base case: if the problem is small enough,
solve it directly).
3. Combine: Merge the solutions of the sub-problems to create the solution for the original
problem.

2. Merge Sort: An Example of Divide and Conquer

Merge Sort is a classic sorting algorithm that perfectly demonstrates this strategy.

How Merge Sort Works:

 Divide: The array is divided into two halves using the middle index: mid = len(array) // 2.
 Conquer: We recursively call Merge Sort on both the left half and the right half. This
continues until the sub-arrays have only one element (which is already sorted).
 Combine: The sorted halves are merged back together to form a complete sorted array.

Diagrammatic Representation

Example Trace:
Input List: [38, 27, 43, 3]

1. Divide:
   o Split into [38, 27] and [43, 3].
   o Split again into [38], [27], [43], [3] (Base Case reached).
2. Conquer & Combine (Merge):
   o Merge [38] and [27] $\rightarrow$ [27, 38]
   o Merge [43] and [3] $\rightarrow$ [3, 43]
   o Merge [27, 38] and [3, 43] $\rightarrow$ [3, 27, 38, 43] (Final Sorted List).

3. Python Implementation

Here is the standard Python code for Merge Sort.

Python
def merge_sort(arr):
    # Base Case: If list has 0 or 1 element, it is already sorted
    if len(arr) <= 1:
        return arr

    # Step 1: Divide
    mid = len(arr) // 2
    left_half = arr[:mid]
    right_half = arr[mid:]

    # Step 2: Conquer (Recursive calls)
    left_sorted = merge_sort(left_half)
    right_sorted = merge_sort(right_half)

    # Step 3: Combine (Merge)
    return merge(left_sorted, right_sorted)

def merge(left, right):
    """Helper function to merge two sorted lists"""
    sorted_list = []
    i = j = 0

    # Compare elements from both lists and add the smaller one to the result
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            sorted_list.append(left[i])
            i += 1
        else:
            sorted_list.append(right[j])
            j += 1

    # Add remaining elements (if any)
    sorted_list.extend(left[i:])
    sorted_list.extend(right[j:])

    return sorted_list

# --- Driver Code ---
numbers = [38, 27, 43, 3, 9, 82, 10]
print("Original:", numbers)
sorted_numbers = merge_sort(numbers)
print("Sorted:  ", sorted_numbers)
4. Complexity Analysis

For a 13-mark question, deriving the complexity is crucial.

Time Complexity

The running time of Merge Sort can be expressed using the recurrence relation:

$$T(n) = 2T(n/2) + O(n)$$

1. Dividing: Calculating the midpoint takes constant time $O(1)$.


2. Conquering: We recursively solve two sub-problems of size $n/2$. This gives us $2T(n/2)$.
3. Combining: Merging two sorted arrays of size $n/2$ takes linear time $O(n)$ because we
iterate through the elements once.

Solving the Recurrence:

Using the "Master Theorem" or "Recursion Tree Method":

 At level 1: Cost is $cn$


 At level 2: Cost is $cn/2 + cn/2 = cn$
 ...
 Height of tree: $\log_2 n$ (because we divide by 2 at every step).

Total Cost = (Cost per level) $\times$ (Number of levels)

$$\text{Total Cost} = cn \times \log_2 n \Rightarrow O(n \log n)$$

Final Time Complexity:

 Best Case: $O(n \log n)$


 Worst Case: $O(n \log n)$
 Average Case: $O(n \log n)$

Space Complexity

 Merge Sort is not an in-place sort. It requires auxiliary (extra) memory to store the temporary
sub-arrays during the merge process.
 Space Complexity: $O(n)$

Question: Discuss Shallow Copy and Deep Copy in Python with a clear code example.
1. Introduction: Assignment vs. Copying

In Python, if you use the assignment operator (=) to assign one list to another (e.g., list2 = list1), it
does not create a copy. Instead, it creates a reference (or alias). Both variables point to the same
memory location.

To create actual duplicates of data, Python provides the copy module, which supports two types of
copying:

1. Shallow Copy
2. Deep Copy
2. Shallow Copy

A Shallow Copy creates a new object (a new container), but it inserts references into it to the objects
found in the original.

 Function: copy.copy(x)
 Behavior:
o It creates a copy of the outer list.
o However, the inner elements (like nested lists) are shared between the original and
the copy.
o If you modify a nested object in the copy, the change will reflect in the original
object.

3. Deep Copy

A Deep Copy creates a new object and then recursively copies the objects found in the original.

 Function: copy.deepcopy(x)
 Behavior:
o It creates a fully independent copy of the original object and all its children.
o It copies the outer list and the inner nested lists.
o If you modify a nested object in the copy, the change will NOT reflect in the original
object.

4. Python Program: Demonstrating Shallow vs. Deep Copy

To demonstrate the difference, we must use a nested list (a list inside a list), because flat lists (lists
with only numbers) behave similarly for both operations.

Python
import copy

def demonstrate_copying():
    print("--- 1. ASSIGNMENT OPERATION (=) ---")
    original = [1, [2, 3], 4]
    alias = original       # Just a reference, not a copy

    alias[0] = 999         # Changing alias changes original
    print(f"Original: {original}")
    print(f"Alias:    {alias}")
    print("Result: Both changed because they point to same memory.\n")

    # Resetting original for next example
    original = [1, [2, 3], 4]

    print("--- 2. SHALLOW COPY (copy.copy) ---")
    # Create a shallow copy
    shallow = copy.copy(original)

    # Modify outer element (safe)
    shallow[0] = 777
    # Modify INNER nested element (Dangerous - affects original)
    shallow[1][0] = 'X'

    print(f"Original: {original}")
    print(f"Shallow:  {shallow}")
    print("Result: Outer element change didn't affect original, but NESTED change did.\n")

    # Resetting original for next example
    original = [1, [2, 3], 4]

    print("--- 3. DEEP COPY (copy.deepcopy) ---")
    # Create a deep copy
    deep = copy.deepcopy(original)

    # Modify outer element
    deep[0] = 555
    # Modify INNER nested element
    deep[1][0] = 'Z'

    print(f"Original: {original}")
    print(f"Deep:     {deep}")
    print("Result: Original remains completely untouched.")

# Run the function
demonstrate_copying()

5. Explanation of the Output

A. In Shallow Copy:

 Code: shallow[0] = 777


o The outer list is a new object, so changing index 0 affects only shallow.
 Code: shallow[1][0] = 'X'
o Index 1 is a list [2, 3]. In a shallow copy, both original and shallow share the
reference to this same inner list.
o Therefore, changing it to 'X' changes it for original too.

B. In Deep Copy:

 Code: deep[1][0] = 'Z'


o deepcopy made a brand new duplicate of the inner list [2, 3] as well.
o The original and deep lists are completely disconnected.
o The original list retains its old values.

PART C (15 Marks) - Application/Problem Solving

1: Write a Python program to define a class Employee with attributes name, id, and
salary. Include methods to Initialize, Calculate Net Salary, and Display Details.
1. Problem Definition

We need to create a software blueprint (Class) for an Employee.

 Data (Attributes): Every employee has a Name, an Employee ID, and a Basic Salary.
 Actions (Methods):
1. __init__: To set up these values when a new employee is created.
2. calculate_net_salary: To perform math (Net Salary = Basic + Allowances -
Deductions).
3. display_details: To print the information neatly.

2. Concept: Class and Object

 Class: The template. Think of it like a "Form" that has blank spaces for Name, ID, Salary.
 Object: The filled-in form. (e.g., Employee "John", ID 101).
 Encapsulation: We are bundling the data (salary, id) and the functions that use them into a
single unit.

3. Class Diagram

In an exam, drawing a simple box diagram adds value: a box with the class name Employee on top, the attributes (name, emp_id, basic_salary) in the middle, and the methods (calculate_net_salary, display_details) at the bottom.

4. Python Program Implementation

Here is the complete code. I have added comments to explain each part.

Python
class Employee:
    """
    A class to represent an Employee in an organization.
    """

    # 1. The Constructor (__init__)
    # This runs automatically when we create a new object (e.g., emp = Employee(...))
    def __init__(self, name, emp_id, basic_salary):
        self.name = name                  # Public Attribute
        self.emp_id = emp_id              # Public Attribute
        self.basic_salary = basic_salary

        # We can calculate allowances immediately or in a separate method
        # Let's assume HRA is 20% of basic and DA is 10% of basic
        self.hra = 0.20 * basic_salary
        self.da = 0.10 * basic_salary

    # 2. Method to Calculate Net Salary
    def calculate_net_salary(self):
        # Earnings
        gross_salary = self.basic_salary + self.hra + self.da

        # Deductions (e.g., Provident Fund = 12% of Basic)
        pf = 0.12 * self.basic_salary

        # Net Salary Formula
        net_salary = gross_salary - pf
        return net_salary

    # 3. Method to Display Details
    def display_details(self):
        print("\n" + "=" * 30)
        print("      EMPLOYEE PAY SLIP      ")
        print("=" * 30)
        print(f"Name         : {self.name}")
        print(f"Employee ID  : {self.emp_id}")
        print(f"Basic Salary : Rs. {self.basic_salary:.2f}")
        print(f"HRA (20%)    : Rs. {self.hra:.2f}")
        print(f"DA (10%)     : Rs. {self.da:.2f}")
        print("-" * 30)

        # Calling the calculation method inside the print statement
        net = self.calculate_net_salary()
        print(f"NET SALARY   : Rs. {net:.2f}")
        print("=" * 30)

# --- Driver Code (Main Part of Program) ---

# Creating Object 1
print("--- Creating First Employee Object ---")
emp1 = Employee("Alice", 101, 50000)
emp1.display_details()

# Creating Object 2
print("\n--- Creating Second Employee Object ---")
emp2 = Employee("Bob", 102, 35000)
emp2.display_details()

5. Trace / Output Generation

Scenario 1: Employee Alice

 Input: Name="Alice", ID=101, Basic=50000.


 Internal Calculations:
o HRA = $20\%$ of $50,000 = 10,000$
o DA = $10\%$ of $50,000 = 5,000$
o Gross = $50,000 + 10,000 + 5,000 = 65,000$
o PF (Deduction) = $12\%$ of $50,000 = 6,000$
o Net Salary = $65,000 - 6,000 = 59,000$

Output on Screen:

Plaintext
--- Creating First Employee Object ---

==============================
EMPLOYEE PAY SLIP
==============================
Name : Alice
Employee ID : 101
Basic Salary : Rs. 50000.00
HRA (20%) : Rs. 10000.00
DA (10%) : Rs. 5000.00
------------------------------
NET SALARY : Rs. 59000.00
==============================
6. Key Concepts to Explain (for 15 Marks)

To ensure full marks, write a short paragraph on these keywords used in the code:

1. class keyword: Used to define the blueprint.


2. self parameter:
o It represents the current instance of the class.
o self.name means "The name variable that belongs to this specific object".
o Without self, Python wouldn't know if you mean Alice's name or Bob's name.
3. __init__ method:
o Also known as the Constructor.
o It initializes the object's state. It is the first thing that runs when Employee() is called.
4. Data Abstraction:
o The complexities of calculating tax, HRA, and DA are hidden inside the
calculate_net_salary method. The user just calls the method and gets the result.

Question: Analyze the Recursion in the "Tower of Hanoi" problem. Write the
algorithm/program and calculate the number of moves required for N disks.
1. Problem Statement

The Tower of Hanoi is a classic mathematical puzzle consisting of three rods and n disks of different
sizes.

 Initial State: All disks are stacked on the first rod (Source) in decreasing order of size
(largest at the bottom, smallest at the top).
 Goal: Move the entire stack to the last rod (Destination).

The Three Rules:

1. Only one disk can be moved at a time.


2. Each move consists of taking the upper disk from one of the stacks and placing it on top of
another stack.
3. No disk may be placed on top of a smaller disk. (A larger disk must always be below a
smaller disk).

2. Recursive Logic (The Algorithm)

The problem might look complex for many disks, but it uses a simple "Divide and Conquer"
strategy. We can solve the problem for n disks if we know how to solve it for n-1 disks.

Let the three rods be:

 Source (A): Where disks start.


 Auxiliary (B): The helper rod.
 Destination (C): Where disks must end up.

The Algorithm has 3 Steps:

1. Move n-1 disks from Source (A) to Auxiliary (B). (Using C as the helper).
2. Move the nth (largest) disk directly from Source (A) to Destination (C).
3. Move the n-1 disks from Auxiliary (B) to Destination (C). (Using A as the helper).
3. Python Program Implementation

Here is the Python code using a recursive function.

Python
def tower_of_hanoi(n, source, destination, auxiliary):
    """
    Function to solve the Tower of Hanoi puzzle.
    n: number of disks
    source: rod where disks initially reside
    destination: rod where disks must go
    auxiliary: helper rod
    """
    # BASE CASE: If there is only 1 disk, just move it
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return

    # STEP 1: Move n-1 disks from Source to Auxiliary
    tower_of_hanoi(n - 1, source, auxiliary, destination)

    # STEP 2: Move the nth disk from Source to Destination
    print(f"Move disk {n} from {source} to {destination}")

    # STEP 3: Move n-1 disks from Auxiliary to Destination
    tower_of_hanoi(n - 1, auxiliary, destination, source)

# --- Driver Code ---
n_disks = 3
print(f"--- Solution for {n_disks} Disks ---")
tower_of_hanoi(n_disks, 'A', 'C', 'B')

4. Trace (Dry Run) for N=3 Disks

To get full marks, you must show the step-by-step output for a small number like n=3.

Input: n=3, Source='A', Destination='C', Aux='B'

Step No.  Logic Applied       Action (Output)
1         Move (n-1) to Aux   Move disk 1 from A to C
2         Move (n-1) to Aux   Move disk 2 from A to B
3         Move (n-1) to Aux   Move disk 1 from C to B
4         Move nth Disk       Move disk 3 from A to C
5         Move (n-1) to Dest  Move disk 1 from B to A
6         Move (n-1) to Dest  Move disk 2 from B to C
7         Move (n-1) to Dest  Move disk 1 from A to C

Total Moves: 7

5. Complexity Analysis (Mathematical Proof)

This is the most critical part for a 15-mark question. You must derive the formula.

Calculating Number of Moves:

Let $T(n)$ be the number of moves required for $n$ disks.

Based on our algorithm:

1. Moving $n-1$ disks takes $T(n-1)$ moves.


2. Moving the largest disk takes $1$ move.
3. Moving $n-1$ disks again takes $T(n-1)$ moves.

Recurrence Relation:

$$T(n) = T(n-1) + 1 + T(n-1)$$


$$T(n) = 2T(n-1) + 1$$

Solving by Substitution:

 For $n=1$: $T(1) = 1$


 For $n=2$: $T(2) = 2(1) + 1 = 3$
 For $n=3$: $T(3) = 2(3) + 1 = 7$
 For $n=4$: $T(4) = 2(7) + 1 = 15$

From the pattern ($1, 3, 7, 15...$), we can see that:

$$T(n) = 2^n - 1$$

Time Complexity:

Since the number of moves grows exponentially with n, the time complexity is $O(2^n)$
(Exponential Time).

Space Complexity:
The space complexity is determined by the maximum depth of the recursion stack (how many
function calls are waiting in memory): $O(n)$ (Linear Space).
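The closed form $T(n) = 2^n - 1$ can be double-checked by counting moves instead of printing them (a small verification sketch, not part of the required answer):

```python
def count_moves(n):
    """Count Tower of Hanoi moves without printing them."""
    if n == 1:
        return 1
    # Recurrence: T(n) = 2*T(n-1) + 1
    return 2 * count_moves(n - 1) + 1

for n in range(1, 6):
    # The recursive count and the closed form 2^n - 1 always agree
    print(n, count_moves(n), 2 ** n - 1)
```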

Question: Implement a "Circular Queue" ADT. Explain why it is better than a simple
Queue implemented using Arrays.
1. The Problem with Linear Queues (Why do we need Circular?)

In a standard Linear Queue (implemented using a fixed-size array/list):

 We insert at the Rear and delete from the Front.


 As we delete elements, the Front pointer moves forward.
 The Issue: The empty spaces created at the beginning of the array cannot be reused.
 Scenario: If the Rear reaches the end of the array index, the system says "Queue Full"
(Overflow), even if there are empty slots at the beginning (freed by Dequeue operations).
 This is called False Overflow or Memory Wastage.

2. The Solution: Circular Queue

A Circular Queue is a linear data structure in which the operations are performed based on a FIFO
(First In First Out) principle, but the last position is connected back to the first position to make a
circle.

 It is also called a "Ring Buffer".


 It solves the major limitation of the normal queue by utilizing the empty spaces created by the
dequeue operation.

3. Logic & Formulas

In a circular queue of size N, we use Modulo Arithmetic (%) to wrap the pointers around.

1. Index Calculation:

Instead of just incrementing indices (rear + 1), we use:

o Next Rear Position: (rear + 1) % size
o Next Front Position: (front + 1) % size
2. Condition for Full (Overflow):
o if (rear + 1) % size == front
o This means the Rear is right behind the Front, so the circle is complete.
3. Condition for Empty (Underflow):
o if front == -1 (Assuming we initialize pointers to -1).

4. Python Implementation

Since Python lists are dynamic, we simulate a Fixed Size Circular Queue to demonstrate the concept
properly (as expected in an algorithm exam).

Python
class CircularQueue:
    def __init__(self, size):
        self.size = size
        # Initialize queue with None
        self.queue = [None] * size
        self.front = -1
        self.rear = -1

    def enqueue(self, item):
        # 1. Check if Queue is Full
        if (self.rear + 1) % self.size == self.front:
            print("Queue is Full! (Overflow)")
            return

        # 2. Check if Queue is Empty (First insertion)
        if self.front == -1:
            self.front = 0
            self.rear = 0
        else:
            # 3. Move Rear circularly
            self.rear = (self.rear + 1) % self.size

        # 4. Insert Element
        self.queue[self.rear] = item
        print(f"Enqueued: {item}")

    def dequeue(self):
        # 1. Check if Queue is Empty
        if self.front == -1:
            print("Queue is Empty! (Underflow)")
            return

        # 2. Retrieve the item and free the slot
        item = self.queue[self.front]
        self.queue[self.front] = None

        # 3. Check if this was the last element
        if self.front == self.rear:
            self.front = -1
            self.rear = -1
        else:
            # 4. Move Front circularly
            self.front = (self.front + 1) % self.size

        print(f"Dequeued: {item}")
        return item

    def display(self):
        if self.front == -1:
            print("Queue is Empty")
            return

        print("Current Queue State:", end=" ")
        # Logic to print from Front to Rear circularly
        if self.rear >= self.front:
            for i in range(self.front, self.rear + 1):
                print(self.queue[i], end=" ")
        else:
            # Print from front to end, then from 0 to rear
            for i in range(self.front, self.size):
                print(self.queue[i], end=" ")
            for i in range(0, self.rear + 1):
                print(self.queue[i], end=" ")
        print()

# --- Driver Code ---
cq = CircularQueue(5)   # Create queue of size 5

cq.enqueue(10)
cq.enqueue(20)
cq.enqueue(30)
cq.enqueue(40)
cq.enqueue(50)

cq.dequeue()        # Removes 10 (Front moves forward)
cq.dequeue()        # Removes 20

cq.enqueue(60)      # Success! (Fills the space left by 10)

cq.display()

5. Trace (Dry Run)

Let's assume a Queue of Size 5.

| Operation | Front | Rear | Queue Array [0, 1, 2, 3, 4] | Comment |
|---|---|---|---|---|
| Start | -1 | -1 | [None, None, None, None, None] | Empty |
| Enqueue(10) | 0 | 0 | [10, None, None, None, None] | First element |
| Enqueue(20) | 0 | 1 | [10, 20, None, None, None] | Normal add |
| Enqueue(30, 40, 50) | 0 | 4 | [10, 20, 30, 40, 50] | Queue FULL |
| Enqueue(60) | 0 | 4 | [10, 20, 30, 40, 50] | Error: Overflow |
| Dequeue() | 1 | 4 | [None, 20, 30, 40, 50] | 10 removed. Space at index 0 is free. |
| Enqueue(60) | 1 | 0 | [60, 20, 30, 40, 50] | Rear wraps to 0. 60 inserted. |


6. Advantages over Linear Queue
| Feature | Linear Queue (Array) | Circular Queue |
|---|---|---|
| Memory Usage | Inefficient. Deleted spaces cannot be reused. | Efficient. Deleted spaces are reused immediately. |
| Overflow Condition | Happens when Rear reaches the end, even if slots at the Front are free. | Happens only when the Queue is genuinely full (Count = Size). |
| Complexity | Time: $O(1)$, Space: $O(N)$ | Time: $O(1)$, Space: $O(N)$ |
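As a practical aside (not part of the array-based algorithm above), Python's standard `collections.deque` gives $O(1)$ appends and pops at both ends, so a bounded queue with the same overflow/underflow behaviour can be sketched as follows. The class name `BoundedQueue` is illustrative, not a library type:

```python
from collections import deque

class BoundedQueue:
    """Fixed-capacity queue on top of collections.deque (illustrative name)."""

    def __init__(self, size):
        self.size = size
        self.items = deque()

    def enqueue(self, item):
        if len(self.items) == self.size:
            return None               # Overflow: queue is full
        self.items.append(item)       # O(1) append at the rear
        return item

    def dequeue(self):
        if not self.items:
            return None               # Underflow: queue is empty
        return self.items.popleft()   # O(1) removal at the front

q = BoundedQueue(3)
for x in (10, 20, 30):
    q.enqueue(x)
print(q.enqueue(40))  # None -> Overflow
print(q.dequeue())    # 10
print(q.enqueue(40))  # 40 -> reuses the freed slot
```

Unlike the manual version, no index arithmetic is needed: `deque` handles the circular buffering internally.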

UNIT – 4
PART A (2 Marks) - Short Answer Questions

Q1. Define a Binary Tree. How is it different from a general Tree?

 Binary Tree: A tree data structure where each node has at most two children,
referred to as the left child and the right child.
 Difference: A general tree can have any number of children per node, whereas a
binary tree is restricted to a maximum of two.

Q2. List the three types of Tree Traversals.

1. In-order: Left Subtree $\rightarrow$ Root $\rightarrow$ Right Subtree.
2. Pre-order: Root $\rightarrow$ Left Subtree $\rightarrow$ Right Subtree.
3. Post-order: Left Subtree $\rightarrow$ Right Subtree $\rightarrow$ Root.

Q3. Define a Binary Search Tree (BST).

A BST is a binary tree with the following properties:

 The value of the left child is less than the parent's value.
 The value of the right child is greater than the parent's value.
 This property applies to every node in the tree.

Q4. What is an AVL Tree? What is the Balance Factor?

 AVL Tree: A self-balancing Binary Search Tree where the difference between
heights of left and right subtrees cannot be more than 1 for all nodes.
 Balance Factor (BF): $BF = Height(Left Subtree) - Height(Right Subtree)$. Allowed
values are $\{-1, 0, 1\}$.

Q5. Differentiate between a Min-Heap and a Max-Heap.


 Min-Heap: The key at the root is the minimum among all keys, and this property is
true for all sub-trees (Parent $\le$ Children).
 Max-Heap: The key at the root is the maximum, and this property is true for all sub-
trees (Parent $\ge$ Children).

Q6. What is a "Complete Binary Tree"?

A binary tree where all levels are completely filled except possibly the last level, which is
filled from left to right. This structure is essential for Heap implementation using arrays.

Q7. What are the applications of Trees?

1. File Systems: To represent directory structures.


2. Compilers: Expression trees are used to parse expressions.
3. Databases: B-Trees are used for indexing data.
4. Networking: Routing algorithms.

Q8. Define a Multiway Search Tree.

A tree where each node can hold more than one key and can have more than two children. A
common example is a B-Tree of order $m$, where a node can have up to $m$ children and
$m-1$ keys.

PART B (13 Marks) - Descriptive & Algorithm Questions

1: Explain the three Tree Traversal techniques (Inorder, Preorder, Postorder) with
recursive algorithms and an example diagram.
1. Introduction to Tree Traversal

Tree Traversal is the process of visiting every node in a tree data structure exactly once. Unlike linear
data structures (Arrays, Linked Lists) where there is only one way to traverse (start to end), trees can
be traversed in different ways.

The three most common Depth-First Search (DFS) traversals are:

1. Inorder Traversal (Left $\rightarrow$ Root $\rightarrow$ Right)


2. Preorder Traversal (Root $\rightarrow$ Left $\rightarrow$ Right)
3. Postorder Traversal (Left $\rightarrow$ Right $\rightarrow$ Root)

2. The Example Tree

Let us consider the following Binary Tree for all our examples.
Structure:

 Root: A
 Left Subtree: B (with children D, E)
 Right Subtree: C (no children)

3. Inorder Traversal (Left - Root - Right)

In this traversal, we visit the left child first, then the root, and finally the right child.

Algorithm Steps:

1. Recursively traverse the Left subtree.


2. Visit (Print) the Root node.
3. Recursively traverse the Right subtree.

Python Implementation:

Python
def inorder_traversal(root):
    if root:
        # Step 1: Recur on Left
        inorder_traversal(root.left)

        # Step 2: Visit Node
        print(root.data, end=" ")

        # Step 3: Recur on Right
        inorder_traversal(root.right)

Trace (Dry Run):

 Start at A. Go Left to B.
 At B, Go Left to D.
 At D (Leaf), Left is None $\rightarrow$ Print D $\rightarrow$ Right is None. Return to B.
 Back at B $\rightarrow$ Print B $\rightarrow$ Go Right to E.
 At E (Leaf) $\rightarrow$ Print E. Return to B, then return to A.
 Back at A $\rightarrow$ Print A $\rightarrow$ Go Right to C.
 At C (Leaf) $\rightarrow$ Print C.

Output: D B E A C

 Note: In a Binary Search Tree (BST), Inorder traversal always gives sorted output.

4. Preorder Traversal (Root - Left - Right)

In this traversal, we visit the root first, then the left child, and finally the right child.

Algorithm Steps:

1. Visit (Print) the Root node.


2. Recursively traverse the Left subtree.
3. Recursively traverse the Right subtree.

Python Implementation:

Python
def preorder_traversal(root):
    if root:
        # Step 1: Visit Node
        print(root.data, end=" ")

        # Step 2: Recur on Left
        preorder_traversal(root.left)

        # Step 3: Recur on Right
        preorder_traversal(root.right)

Trace (Dry Run):

 Start at A $\rightarrow$ Print A.


 Go Left to B $\rightarrow$ Print B.
 Go Left to D $\rightarrow$ Print D. (Left/Right done). Return to B.
 Go Right to E $\rightarrow$ Print E. (Left/Right done). Return to A.
 Go Right to C $\rightarrow$ Print C.

Output: A B D E C

 Application: Used to create a copy of the tree.

5. Postorder Traversal (Left - Right - Root)

In this traversal, we visit the left child first, then the right child, and finally the root. The root is
always visited last.

Algorithm Steps:

1. Recursively traverse the Left subtree.


2. Recursively traverse the Right subtree.
3. Visit (Print) the Root node.

Python Implementation:

Python
def postorder_traversal(root):
    if root:
        # Step 1: Recur on Left
        postorder_traversal(root.left)

        # Step 2: Recur on Right
        postorder_traversal(root.right)

        # Step 3: Visit Node
        print(root.data, end=" ")
Trace (Dry Run):

 Start at A. Go Left to B.
 At B, Go Left to D.
 At D (Leaf) $\rightarrow$ Print D. Return to B.
 At B, Go Right to E.
 At E (Leaf) $\rightarrow$ Print E. Return to B.
 Back at B (Left/Right done) $\rightarrow$ Print B. Return to A.
 At A, Go Right to C.
 At C (Leaf) $\rightarrow$ Print C. Return to A.
 Back at A (Left/Right done) $\rightarrow$ Print A.

Output: D E B C A

 Application: Used to delete the tree (delete children before deleting parent).

6. Comparison Summary Table


| Feature | Inorder | Preorder | Postorder |
|---|---|---|---|
| Sequence | Left $\rightarrow$ Root $\rightarrow$ Right | Root $\rightarrow$ Left $\rightarrow$ Right | Left $\rightarrow$ Right $\rightarrow$ Root |
| Root Position | Middle | First | Last |
| Output (Example) | D B E A C | A B D E C | D E B C A |
| Primary Use | Getting sorted data from BST | Copying Trees, Expression Prefix | Deleting Trees, Expression Postfix |

2: Explain Binary Search Tree (BST) ADT. Discuss the Insert and Delete
operations with examples.
1. Definition: What is a Binary Search Tree (BST)?

A Binary Search Tree (BST) is a special type of Binary Tree that maintains a sorted order
of elements. It allows for efficient Searching, Insertion, and Deletion operations.

Properties of BST:

For every node in the tree:

1. Left Subtree: All values in the left child (and its subtrees) are smaller than the parent
node.
2. Right Subtree: All values in the right child (and its subtrees) are greater than the
parent node.
3. No Duplicates: Typically, BSTs do not allow duplicate values.


2. Operation 1: Insertion

To insert a new value into a BST, we must find the correct spot so that the BST properties
(Left < Root < Right) are maintained.

Algorithm:

1. Start at the Root.


2. Compare the New Value with the Current Node.
3. If New Value < Current Node: Move to the Left child.
4. If New Value > Current Node: Move to the Right child.
5. Repeat steps 2-4 until you reach an empty spot (None).
6. Insert the new node at that empty spot.

Example Trace:

Current Tree: Root = 50, Left = 30, Right = 70.

Task: Insert 60.

1. Compare 60 with Root (50). $60 > 50$, so go Right.


2. Current Node is 70.
3. Compare 60 with 70. $60 < 70$, so go Left.
4. Left of 70 is Empty. Insert 60 here.

3. Operation 2: Deletion (The Critical Part)

Deletion is more complex than insertion because removing a node might break the tree
structure. We handle this in three distinct cases.

Case 1: Deleting a Leaf Node (No Children)


 Scenario: The node to be deleted has no left or right child.
 Action: Simply remove the node (set the pointer from its parent to None).
 Example: Deleting 20 from the tree below.

Plaintext

50 50
/ \ / \
30 70 ---> 30 70
/
20 (Delete this)

Case 2: Deleting a Node with One Child

 Scenario: The node has only one child (either Left or Right).
 Action: Bypass the node. Make the parent of the node point directly to the node's
single child.
 Example: Deleting 30 (which has child 20). Parent 50 now connects directly to 20.

Case 3: Deleting a Node with Two Children

 Scenario: The node has both Left and Right children.


 Action:
1. Find the Inorder Successor of the node.
 Definition: The smallest value in the Right Subtree.
2. Copy the Inorder Successor's value to the node you wanted to delete.
3. Delete the Inorder Successor node (which will now be a Case 1 or Case 2
deletion).

Detailed Example for Case 3:

Tree:

Plaintext
50
/ \
30 70
/ \
60 80

Task: Delete 50 (Root).

1. Identify Case: 50 has two children (30 and 70).


2. Find Successor: Go to Right Subtree (70), then find the smallest value. The smallest
value is 60 (Left child of 70).
3. Replace: Copy 60 to the Root. The Root is now 60.
4. Delete Successor: Now delete the original 60 from its old position (leaf node).

Result:

Plaintext
60
/ \
30 70
\
80

4. Python Implementation (Pseudocode Style)

To secure the "Algorithm" marks, write this simplified logic:

Python
class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    if root is None:
        return Node(key)

    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def minValueNode(node):
    # The Inorder Successor is the left-most node of the given subtree
    current = node
    while current.left is not None:
        current = current.left
    return current

def deleteNode(root, key):
    # Base Case
    if root is None:
        return root

    # 1. Find the node
    if key < root.val:
        root.left = deleteNode(root.left, key)
    elif key > root.val:
        root.right = deleteNode(root.right, key)

    # 2. Node found! Handle 3 Cases
    else:
        # Case 1 & 2: No child or One child
        if root.left is None:
            return root.right
        elif root.right is None:
            return root.left

        # Case 3: Two children
        # Find Inorder Successor (smallest in right subtree)
        temp = minValueNode(root.right)
        root.val = temp.val
        # Delete the successor
        root.right = deleteNode(root.right, temp.val)
    return root

5. Complexity Analysis

 Time Complexity:
o Best/Average Case: $O(\log n)$ (Because at every step, we eliminate half the
tree).
o Worst Case: $O(n)$ (If the tree is skewed, looking like a linked list).
 Space Complexity: $O(h)$ where $h$ is the height of the tree (for recursion stack).

3: Explain Min-Heap and Max-Heap operations. Discuss how Heaps are used in Priority
Queues.
1. Definition of a Binary Heap

A Heap is a specialized tree-based data structure that satisfies two specific properties:

1. Shape Property: It must be a Complete Binary Tree. This means all levels are
completely filled except possibly the last level, which is filled from left to right.
2. Heap Property: It must satisfy a specific ordering between parent and children nodes
(either Min or Max).

2. Types of Heaps

A. Max-Heap

In a Max-Heap, the value of every Parent node is greater than or equal to the values of its
Children.

 Root: The root contains the Maximum element of the entire tree.
 Rule: $Parent \ge Children$

B. Min-Heap

In a Min-Heap, the value of every Parent node is less than or equal to the values of its
Children.

 Root: The root contains the Minimum element of the entire tree.
 Rule: $Parent \le Children$

3. Heap Operations (Explained using Max-Heap)

There are two main operations: Insertion and Deletion (Extract Max).

Operation 1: Insertion (Heapify Up)

When we add a new element, we must maintain the Complete Binary Tree shape first, and
then fix the Heap Order.
Algorithm:

1. Add the new key at the end of the tree (last available position in the array).
2. Compare the new key with its Parent.
3. Swap: If the new key is greater than its parent (in Max-Heap), swap them.
4. Repeat: Continue moving up the tree until the property is satisfied or the root is
reached.

Example: Insert 60 into a Max-Heap.

 Step 1: Add 60 to the bottom-right.


 Step 2: 60 > Parent (say 40)? Yes. Swap.
 Step 3: 60 > New Parent (say 50)? Yes. Swap.
 Result: 60 finds its correct place.

Operation 2: Deletion / Extract Max (Heapify Down)

In a Max-Heap, we typically only remove the Root (the maximum element).

Algorithm:

1. Remove the Root (Max element).


2. Replace the Root with the Last Element in the tree (bottom-most, right-most node).
3. Compare the new Root with its children.
4. Swap: If the new Root is smaller than the larger of its two children, swap it with that
child.
5. Repeat: Continue moving down the tree until the property is restored or a leaf is
reached.

Example: Delete Root (100).

 Step 1: Remove 100.


 Step 2: Move last element (say 10) to the Root position.
 Step 3: 10 < Child (say 80)? Yes. Swap 10 and 80.
 Step 4: Continue swapping down until 10 is in a valid spot.
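The two operations above can be sketched on a plain Python list, where index 0 is the root and the parent of index $i$ is $(i-1)/2$ (integer division). The function names below are illustrative, not from any library:

```python
def heapify_up(heap, i):
    # Bubble the element at index i up while it is larger than its parent (Max-Heap)
    while i > 0 and heap[i] > heap[(i - 1) // 2]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

def heapify_down(heap, i):
    # Sink the element at index i down until both children are smaller
    n = len(heap)
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            break  # Heap property restored
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def insert(heap, key):
    heap.append(key)                 # Step 1: add at the end
    heapify_up(heap, len(heap) - 1)  # Steps 2-4: bubble up

def extract_max(heap):
    root = heap[0]                   # Step 1: the maximum is at the root
    heap[0] = heap[-1]               # Step 2: move the last element to the root
    heap.pop()
    heapify_down(heap, 0)            # Steps 3-5: sift down
    return root

h = []
for k in (40, 50, 10, 100, 80, 60):
    insert(h, k)
print(extract_max(h))  # 100
print(h[0])            # 80 (new maximum)
```

Each call touches at most one node per level, which is where the $O(\log n)$ bounds in the next section come from.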

4. Heaps in Priority Queues

A Priority Queue is an abstract data type where each element has a "priority". In a standard
Queue (FIFO), elements leave in the order of arrival. In a Priority Queue, the element with
the highest priority is served first.

Why use Heaps for Priority Queues?

We could use Arrays or Linked Lists, but Heaps are much more efficient:

| Data Structure | Insert Operation | Delete Max Operation | Drawback |
|---|---|---|---|
| Unsorted Array | $O(1)$ | $O(n)$ | Searching for Max is too slow. |
| Sorted Array | $O(n)$ | $O(1)$ | Shifting elements during Insert is too slow. |
| Binary Heap | $O(\log n)$ | $O(\log n)$ | Perfect Balance! |

How it works:

1. Enqueue (Insert): We use the Heap Insertion algorithm. This ensures the highest
priority item bubbles up to the root efficiently.
2. Dequeue (Remove): We use the Heap Deletion (Extract Root) algorithm. This
instantly gives us the highest priority item and reorganizes the rest efficiently.

Real-world Applications:

 CPU Scheduling: Processes with higher priority (system tasks) are executed before
lower priority ones (user apps).
 Dijkstra's Algorithm: Used to find the shortest path in graph algorithms.
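Python's standard library implements exactly this idea: the `heapq` module maintains a Min-Heap on a plain list, so the smallest value is always at index 0. With `(priority, task)` tuples (lower number = higher priority), it behaves as a priority queue; the task names below are illustrative:

```python
import heapq

pq = []  # a plain list managed as a Min-Heap by heapq
heapq.heappush(pq, (3, "user app"))
heapq.heappush(pq, (1, "system task"))
heapq.heappush(pq, (2, "driver update"))

# Tuples compare element-wise, so the lowest priority number is popped first
print(heapq.heappop(pq))  # (1, 'system task')
print(heapq.heappop(pq))  # (2, 'driver update')
```

For Max-Heap behaviour (highest number first), a common trick is to push negated priorities.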

5. Complexity Analysis
| Operation | Time Complexity | Reason |
|---|---|---|
| Build Heap | $O(n)$ | Special tight bound analysis. |
| Insert | $O(\log n)$ | Height of the tree is $\log n$. |
| Delete Max | $O(\log n)$ | We traverse down the height of the tree. |
| Find Max | $O(1)$ | It is always at the Root. |


This is a diagram-heavy question. In an exam, the text can be brief, but the diagrams must
be perfect to score the full 13 marks.
4: Explain AVL Tree Rotations with clear diagrams.
1. What is an AVL Tree?

An AVL tree (named after Adelson-Velsky and Landis) is a self-balancing Binary Search
Tree (BST). It ensures the tree remains balanced to guarantee $O(\log n)$ search time.

The Balance Condition:

For every node in the tree, the difference between the height of the left subtree and the height
of the right subtree must be -1, 0, or +1.

$$Balance Factor (BF) = Height(Left) - Height(Right)$$

If the Balance Factor of any node becomes +2 or -2, the tree is "unbalanced," and we perform
Rotations to fix it.

2. Types of Rotations

There are four types of rotations depending on where the new node was inserted:

1. LL Rotation (Single Rotation)


2. RR Rotation (Single Rotation)
3. LR Rotation (Double Rotation)
4. RL Rotation (Double Rotation)

3. LL Rotation (Left-Left Case)

When to use: When a new node is inserted into the Left child of the Left subtree of a node
that becomes critical.

 Problem: The tree is "Left Heavy" (BF = +2).


 Solution: Perform a Right Rotation.

Diagrammatic Explanation:

Imagine Node A is unbalanced because of the left side.

 Before: C is left child of B, and B is left child of A. (Line: A-B-C).


 Action: Pull B up. A moves down to the right.
 After: B becomes the new root. C is left child, A is right child.

4. RR Rotation (Right-Right Case)

When to use: When a new node is inserted into the Right child of the Right subtree.
 Problem: The tree is "Right Heavy" (BF = -2).
 Solution: Perform a Left Rotation.

Diagrammatic Explanation:

Imagine Node A is unbalanced.

 Before: C is right child of B, and B is right child of A. (Line: A-B-C).


 Action: Pull B up. A moves down to the left.
 After: B becomes the new root. A is left child, C is right child.

5. LR Rotation (Left-Right Case)

When to use: When a new node is inserted into the Right child of the Left subtree.

 Problem: The path is "Zig-Zag" (Left then Right). Single rotation won't fix it.
 Solution: Double Rotation.
1. Left Rotate the child node (to convert it into an LL case).
2. Right Rotate the critical node (to solve the LL case).

Example:

 Before: Root is C. Left child is A. Right child of A is B. (Structure: C-A-B).


 Step 1 (Left Rotate A): Moves B up. Now we have a straight line (C-B-A). This is
now an LL case.
 Step 2 (Right Rotate C): Moves B up to Root. A is left, C is right.

6. RL Rotation (Right-Left Case)

When to use: When a new node is inserted into the Left child of the Right subtree.

 Problem: The path is "Zig-Zag" (Right then Left).


 Solution: Double Rotation.
1. Right Rotate the child node (to convert it into an RR case).
2. Left Rotate the critical node (to solve the RR case).

Example:

 Before: Root is A. Right child is C. Left child of C is B. (Structure: A-C-B).


 Step 1 (Right Rotate C): Moves B up. Now we have a straight line (A-B-C). This is
now an RR case.
 Step 2 (Left Rotate A): Moves B up to Root. A is left, C is right.

Summary Table for Exam (Quick Reference):


| Problem Area | Imbalance Type | Rotation Required | Visual Action |
|---|---|---|---|
| Left of Left Child | LL Case | Single Right Rotate | Pull middle up, push root right. |
| Right of Right Child | RR Case | Single Left Rotate | Pull middle up, push root left. |
| Right of Left Child | LR Case | Left then Right | Straighten Zig-Zag, then fix. |
| Left of Right Child | RL Case | Right then Left | Straighten Zig-Zag, then fix. |
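The single rotations are small pointer manipulations, and the double rotations are just compositions of them. A minimal sketch (the `Node` class is illustrative; a full AVL implementation would also update node heights and balance factors after each rotation):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_right(y):
    # LL case: pull the left child (x) up, push y down to the right
    x = y.left
    y.left = x.right   # x's right subtree becomes y's left subtree
    x.right = y
    return x           # x is the new subtree root

def rotate_left(x):
    # RR case: pull the right child (y) up, push x down to the left
    y = x.right
    x.right = y.left
    y.left = x
    return y

def rotate_left_right(node):   # LR case: Left rotate child, then Right rotate node
    node.left = rotate_left(node.left)
    return rotate_right(node)

def rotate_right_left(node):   # RL case: Right rotate child, then Left rotate node
    node.right = rotate_right(node.right)
    return rotate_left(node)

# RR example from Section 4: the chain A - B - C becomes balanced at B
a, b, c = Node("A"), Node("B"), Node("C")
a.right = b
b.right = c
root = rotate_left(a)
print(root.key, root.left.key, root.right.key)  # B A C
```

Note how each rotation reattaches exactly one subtree (`x.right` or `y.left`), which is why a rotation costs only $O(1)$.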

PART C (15 Marks) - Descriptive Questions

Question:

Construct an AVL Tree by inserting the following elements in sequence: 10, 20, 30, 40,
50, 25.

Answer:

1. Introduction

An AVL Tree is a self-balancing Binary Search Tree (BST) where the difference between
the heights of left and right subtrees (Balance Factor) cannot be more than 1 for all nodes.

 Balance Factor (BF) = Height(Left Subtree) - Height(Right Subtree)


 Permissible BF values: $\{-1, 0, 1\}$.
 If $BF \notin \{-1, 0, 1\}$, rotations are performed to restore balance.

2. Step-by-Step Construction

Step 1: Insert 10

 Create the root node.


 Tree:

Plaintext

10 (BF=0)

Step 2: Insert 20

 $20 > 10$, so insert 20 as the right child of 10.


 BF Calculation:
o Node 10: $H(Left) - H(Right) = 0 - 1 = -1$ (Balanced)
 Tree:

Plaintext

10 (-1)
\
20 (0)

Step 3: Insert 30

 $30 > 10$, $30 > 20$. Insert 30 as right child of 20.
 BF Calculation:
o Node 20: $0 - 1 = -1$
o Node 10: $0 - 2 = -2$ (Unbalanced)
 Imbalance Type: The imbalance is in the Right child of the Right subtree (RR
Case).
 Action: Perform Single Left Rotation (RR Rotation) on node 10.
o Node 20 moves up to become the root.
o Node 10 becomes the left child of 20.
 Resulting Tree:

Plaintext

20 (0)
/ \
10 (0) 30 (0)

Step 4: Insert 40

 $40 > 20$, $40 > 30$. Insert 40 as right child of 30.
 BF Calculation:
o Node 30: $0 - 1 = -1$
o Node 20: $1 - 2 = -1$
 Status: The tree is Balanced.
 Tree:

Plaintext

20 (-1)
/ \
10 30 (-1)
\
40 (0)

Step 5: Insert 50

 $50 > 20$, $50 > 30$, $50 > 40$. Insert 50 as right child of 40.
 BF Calculation:
o Node 40: $0 - 1 = -1$
o Node 30: $0 - 2 = -2$ (Unbalanced)
 Imbalance Type: The imbalance is in the Right child of the Right subtree of node 30
(RR Case).
 Action: Perform Single Left Rotation on node 30.
o Node 40 moves up.
o Node 30 becomes the left child of 40.
o Node 50 remains the right child of 40.
 Resulting Tree:

Plaintext

20 (-1)
/ \
10 40 (0)
/ \
30 50

Step 6: Insert 25

 Compare with root 20: $25 > 20$ (Go Right).


 Compare with 40: $25 < 40$ (Go Left).
 Compare with 30: $25 < 30$ (Insert Left).
 Tree before Rotation:

Plaintext

20 (-2) <-- Critical Node (Unbalanced)
/ \
10 40 (1)
/ \
30 50
/
25

 BF Calculation:
o Node 30: $1 - 0 = 1$
o Node 40: $H(Left=2) - H(Right=1) = 1$
o Node 20: $H(Left=1) - H(Right=3) = -2$ (Unbalanced)
 Imbalance Type: The insertion happened in the Right subtree of 20, and then in the
Left subtree of 40. This is an RL Case (Right-Left Case).
 Action: Perform Double Rotation (RL Rotation).

1. Step A (Right Rotation on 40): Move 30 up, 40 down to the right.


2. Step B (Left Rotation on 20): Move 30 up to root, 20 down to the left.

o Intermediate (After Right Rotate on 40):

Plaintext

20
\
30
/ \
25 40
\
50

o Final (After Left Rotate on 20):

Plaintext

30
/ \
20 40
/ \ \
10 25 50

3. Final Output

The final balanced AVL tree structure is:

Plaintext
30
/ \
20 40
/ \ \
10 25 50

Verification:

 BF(10) = 0
 BF(25) = 0
 BF(50) = 0
 BF(20) = $1 - 1 = 0$
 BF(40) = $0 - 1 = -1$
 BF(30) = $2 - 2 = 0$
 Conclusion: All nodes satisfy the AVL property $|BF| \le 1$. The tree is balanced.

Question:

Explain the properties and insertion operation of a B-Tree of Order 5.

Answer:

1. Introduction to B-Tree

A B-Tree is a self-balancing search tree designed to work well on magnetic disks or other
direct-access secondary storage devices. It generalizes the binary search tree concept,
allowing nodes to have more than two children.
2. Properties of a B-Tree of Order 5

Let $m$ be the order of the B-Tree. Here, $m = 5$.

The properties for this specific tree are derived as follows:

1. Maximum Children: Every node can have at most $m$ children.
o $Max = 5$ children.
2. Minimum Children: Every non-leaf node (except the root) must have at least $\lceil
m/2 \rceil$ children.
o $Min = \lceil 5/2 \rceil = 3$ children.
3. Maximum Keys: Every node can contain at most $m-1$ keys.
o $Max Keys = 5 - 1 = 4$ keys.
4. Minimum Keys: Every non-root node must contain at least $\lceil m/2 \rceil - 1$
keys.
o $Min Keys = 3 - 1 = 2$ keys.
5. Root Node: The root must have at least 2 children (unless it is a leaf).
6. Leaf Nodes: All leaf nodes must appear at the same level (perfectly balanced).
7. Ordering: Keys in a node are sorted in increasing order.

3. Insertion Algorithm

The insertion operation in a B-Tree is performed using the following logic:

1. Search: Traverse the tree to find the appropriate leaf node where the key should be
inserted.
2. Insert: Insert the key into the leaf node in sorted order.
3. Check for Overflow:
o If the node contains $\le m-1$ keys (i.e., $\le 4$ keys), the operation is
complete.
o If the node contains $m$ keys (i.e., 5 keys), an Overflow occurs.
4. Split Operation (Handling Overflow):
o The node is split into two nodes.
o The median key (the middle element) is promoted to the parent node.
o The keys smaller than the median go to the left new node, and keys larger go
to the right new node.
o If the parent becomes full, the split propagates upward (potentially splitting
the root).

4. Step-by-Step Construction (Example Trace)

Task: Construct a B-Tree of Order 5 by inserting numbers: 10, 20, 30, 40, 50, 60, 70, 80, 90.

Step 1: Insert 10, 20, 30, 40

 The root is empty. We insert keys in sorted order.


 The maximum capacity is 4 keys.
 Tree:
Plaintext

[ 10, 20, 30, 40 ]

Step 2: Insert 50 (Overflow & Split)

 Insert 50 into the node: [10, 20, 30, 40, 50].


 Issue: The node now has 5 keys, which exceeds the limit (Max 4).
 Action: Split the node.
o Median: The middle element is 30.
o Promote: 30 becomes the new root.
o Split: 10, 20 become the Left Child. 40, 50 become the Right Child.
 Tree:

Plaintext

[ 30 ]
/ \
[10, 20] [40, 50]

Step 3: Insert 60, 70

 $60 > 30$, go to right child [40, 50]. Insert 60 $\rightarrow$ [40, 50, 60].
 $70 > 30$, go to right child. Insert 70 $\rightarrow$ [40, 50, 60, 70].
 The right node is now full (4 keys), but valid.
 Tree:

Plaintext

[ 30 ]
/ \
[10, 20] [40, 50, 60, 70]

Step 4: Insert 80 (Overflow & Split)

 $80 > 30$, go to right child.


 Try to insert 80: [40, 50, 60, 70, 80].
 Issue: Node has 5 keys (Overflow).
 Action: Split the right child.
o Median: The middle element is 60.
o Promote: 60 moves up to the parent (Root).
o Split: 40, 50 stay in left split. 70, 80 go to right split.
 The Root [30] receives 60 and becomes [30, 60].
 Tree:

Plaintext

[ 30, 60 ]
/ | \
[10, 20] [40, 50] [70, 80]

Step 5: Insert 90
 $90 > 60$, go to the rightmost child [70, 80].
 Insert 90: [70, 80, 90].
 Node size is 3 (Valid).
 Final Tree:

Plaintext

[ 30, 60 ]
/ | \
[10, 20] [40, 50] [70, 80, 90]

5. Summary of Final Structure

 Root: Contains keys 30, 60. It has 3 children.


 Children:
1. First child contains 10, 20.
2. Second child contains 40, 50.
3. Third child contains 70, 80, 90.
 Validation:

o Root has 2 keys (Valid, $\le 4$).


o Root has 3 children (Valid, $\ge 2$).
o All leaf nodes are at the same level.
o All leaf nodes have between 2 and 4 keys.
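The whole trace above can be reproduced with a compact order-$m$ B-Tree sketch. This is insertion only; the class names are illustrative, `bisect` is used for sorted insertion, and deletion and disk paging are omitted:

```python
import bisect

class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []
        self.children = []
        self.leaf = leaf

class BTree:
    def __init__(self, order):
        self.order = order            # max children = order; max keys = order - 1
        self.root = BTreeNode(leaf=True)

    def insert(self, key):
        self._insert(self.root, key)
        # If the root itself overflowed, split it and grow the tree one level
        if len(self.root.keys) > self.order - 1:
            new_root = BTreeNode(leaf=False)
            new_root.children.append(self.root)
            self._split_child(new_root, 0)
            self.root = new_root

    def _insert(self, node, key):
        if node.leaf:
            bisect.insort(node.keys, key)    # insert into the leaf in sorted order
            return
        i = bisect.bisect(node.keys, key)    # choose the child to descend into
        self._insert(node.children[i], key)
        if len(node.children[i].keys) > self.order - 1:
            self._split_child(node, i)       # child overflowed: split it

    def _split_child(self, parent, i):
        child = parent.children[i]
        mid = len(child.keys) // 2
        median = child.keys[mid]             # median key is promoted to the parent
        right = BTreeNode(leaf=child.leaf)
        right.keys = child.keys[mid + 1:]
        child.keys = child.keys[:mid]
        if not child.leaf:
            right.children = child.children[mid + 1:]
            child.children = child.children[:mid + 1]
        parent.keys.insert(i, median)
        parent.children.insert(i + 1, right)

tree = BTree(order=5)
for k in (10, 20, 30, 40, 50, 60, 70, 80, 90):
    tree.insert(k)

print(tree.root.keys)                        # [30, 60]
print([c.keys for c in tree.root.children])  # [[10, 20], [40, 50], [70, 80, 90]]
```

The printed structure matches the final tree in the worked example: root [30, 60] with three leaf children at the same level.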

UNIT – 3

PART A (2 Marks) - Short Answer Questions

1. What is the difference between Linear Search and Binary Search?

 Linear Search: Scans elements sequentially. Works on both sorted and unsorted lists.
Time Complexity: $O(n)$.
 Binary Search: Divide and conquer approach. Works only on sorted lists. Time
Complexity: $O(\log n)$.

2. Define Hashing and Hash Function.

 Hashing: A technique to convert a range of key values into a range of indexes of an


array (hash table) for faster access.
 Hash Function: A mathematical formula used to map keys to hash table indices.
Example: $h(k) = k \mod size$.

3. What is a Collision in Hashing?


A collision occurs when two distinct keys generate the same hash value (index) using the
hash function. i.e., $h(k1) = h(k2)$ where $k1 \neq k2$.

4. List the time complexities of Quick Sort.

 Best Case: $O(n \log n)$ (Balanced partitioning).


 Average Case: $O(n \log n)$.
 Worst Case: $O(n^2)$ (When the array is already sorted or reverse sorted, and pivot
is always the smallest/largest element).

5. Define Load Factor in Hashing.

The load factor ($\lambda$) is the ratio of the number of elements stored in the hash table to
the total size of the table.

$$\lambda = \frac{\text{Number of elements}}{\text{Table Size}}$$

6. What is Rehashing?

Rehashing is the process of increasing the size of the hash table (usually doubling it) and re-
inserting all existing elements into the new table when the load factor exceeds a certain
threshold.

7. Differentiate between Stable and Unstable Sorting.

 Stable Sort: Preserves the relative order of equal elements. Example: Merge Sort,
Insertion Sort.
 Unstable Sort: Does not guarantee the order of equal elements. Example: Quick Sort,
Selection Sort.

8. What is the advantage of Merge Sort over Quick Sort?

Merge Sort guarantees $O(n \log n)$ time complexity in the worst case, whereas Quick Sort
can degrade to $O(n^2)$. However, Merge Sort requires extra space ($O(n)$).

PART B (13 Marks) - Descriptive Questions

Q1. Explain the Quick Sort algorithm with an illustrative example. Discuss its
complexity.

Answer:

1. Introduction:

Quick Sort is a highly efficient sorting algorithm based on the Divide and Conquer paradigm.
It works by selecting a 'pivot' element from the array and partitioning the other elements into
two sub-arrays, according to whether they are less than or greater than the pivot.

2. Algorithm (Steps):

1. Choose a Pivot: Pick an element from the array to serve as the pivot (commonly the
first, last, or middle element).
2. Partitioning: Reorder the array so that all elements with values less than the pivot
come before the pivot, and all elements with values greater than the pivot come after
it. After this partitioning, the pivot is in its final position.
3. Recursive Sorting: Recursively apply the above steps to the sub-array of elements
with smaller values and the sub-array of elements with greater values.

3. Illustrative Example:

Input Array: [10, 80, 30, 90, 40, 50, 70]

Method: Let's use the Last Element as the Pivot.

 Pass 1:
o Pivot: 70
o Pointer i: Starts at -1 (tracks elements smaller than pivot).
o Pointer j: Scans from index 0 to 5.
o Comparison:
 10 < 70: Increment i, swap (no change). Array: [10, 80, 30...]
 80 > 70: Do nothing.
 30 < 70: Increment i, swap 80 and 30. Array: [10, 30, 80, 90, 40...]
 90 > 70: Do nothing.
 40 < 70: Increment i, swap 80 and 40. Array: [10, 30, 40, 90, 80, 50,
70]
 50 < 70: Increment i, swap 90 and 50. Array: [10, 30, 40, 50, 80, 90,
70]
o Final Step: Swap Pivot (70) with i+1 (80).
o Result after Partition: [10, 30, 40, 50, **70**, 90, 80]
o (Pivot 70 is now at its correct sorted position).
 Pass 2 (Recursion):
o Left Sub-array: [10, 30, 40, 50] (Already sorted in this case, but recursion
continues).
o Right Sub-array: [90, 80] $\rightarrow$ Pivot 80. Swap 90, 80 $\rightarrow$
[80, 90].
 Final Sorted Array: [10, 30, 40, 50, 70, 80, 90]

4. Complexity Analysis:

 Best Case: $O(n \log n)$. Occurs when the pivot always divides the array into two
nearly equal halves.
 Average Case: $O(n \log n)$.
 Worst Case: $O(n^2)$. Occurs when the array is already sorted (ascending or
descending) and the pivot is always the smallest or largest element, creating highly
unbalanced partitions.
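The partition logic traced above (last element as pivot, with `i` tracking the boundary of the "smaller than pivot" region — the Lomuto scheme) translates directly into Python:

```python
def partition(arr, low, high):
    pivot = arr[high]          # last element as pivot
    i = low - 1                # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot just after the smaller region: its final sorted position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)   # pivot lands in its final position
        quick_sort(arr, low, p - 1)     # recursively sort the left sub-array
        quick_sort(arr, p + 1, high)    # recursively sort the right sub-array

data = [10, 80, 30, 90, 40, 50, 70]
quick_sort(data, 0, len(data) - 1)
print(data)  # [10, 30, 40, 50, 70, 80, 90]
```

The first call to `partition` here performs exactly the swaps shown in Pass 1 of the trace.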

Q2. Describe the Collision Resolution techniques in Hashing.

Answer:

1. Introduction:
In hashing, a Collision occurs when the hash function maps two distinct keys to the same
index in the hash table (i.e., $h(k_1) = h(k_2)$). Collision resolution techniques are methods
to handle this scenario.

2. Technique 1: Separate Chaining (Open Hashing)

 Concept: Each slot (index) in the hash table contains a pointer to a Linked List. All
keys that hash to the same index are stored in that linked list.
 Operation:
o Insert: Calculate hash index. Add the key to the linked list at that index.
o Search: Calculate hash index. Traverse the linked list at that index to find the
key.
 Advantage: Simple to implement; table never fills up (lists just get longer).
 Disadvantage: Requires extra memory for pointers; search time increases if chains
become long ($O(n)$ in worst case).

3. Technique 2: Open Addressing (Closed Hashing)

 Concept: All elements are stored within the hash table array itself. If a collision
occurs, we probe (search) for the next empty slot using a specific rule.
 Types of Probing:
o A. Linear Probing:
 We linearly search for the next empty slot.
 Function: $Index = (h(k) + i) \pmod{size}$, where $i = 0, 1, 2...$
 Drawback: Primary Clustering (clusters of occupied slots merge,
increasing search time).
o B. Quadratic Probing:
 We search for slots based on a quadratic equation to reduce clustering.
 Function: $Index = (h(k) + c_1 i + c_2 i^2) \pmod{size}$
 Drawback: Secondary Clustering (keys hashing to the same initial
position follow the same probe sequence).
o C. Double Hashing:
 Uses a second independent hash function to determine the step size.
 Function: $Index = (h_1(k) + i \times h_2(k)) \pmod{size}$
 Advantage: Drastically reduces clustering; considered one of the best
open addressing methods.
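Linear probing, the simplest of these, can be sketched as follows (an illustration; empty slots hold None, and quadratic or double hashing would only change the probe formula):

```python
def linear_probe_insert(table, key):
    """Open addressing with linear probing: step to the next slot on collision."""
    size = len(table)
    index = key % size
    for i in range(size):
        probe = (index + i) % size  # Index = (h(k) + i) mod size
        if table[probe] is None:
            table[probe] = key
            return probe
    raise RuntimeError("hash table is full")

table = [None] * 10
for k in [12, 18, 13, 2, 3, 23, 5, 15]:
    linear_probe_insert(table, k)
print(table)  # → [None, None, 12, 13, 2, 3, 23, 5, 18, 15]
```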

Q3. Explain Merge Sort algorithm. Show the trace of Merge Sort for the data: 38, 27,
43, 3, 9, 82, 10.

Answer:

1. Algorithm:

Merge Sort is a stable sorting algorithm that uses the Divide and Conquer strategy.

1. Divide: Find the middle point of the array to divide it into two halves.
2. Conquer: Recursively call Merge Sort for the first half and the second half.
3. Combine (Merge): Merge the two sorted halves into a single sorted array.

2. Trace for Input: [38, 27, 43, 3, 9, 82, 10]


 Step 1: Divide
o [38, 27, 43, 3] vs [9, 82, 10]
 Step 2: Divide again
o Left: [38, 27] vs [43, 3]
o Right: [9, 82] vs [10]
 Step 3: Divide to single elements
o [38], [27], [43], [3], [9], [82], [10]
 Step 4: Merge (Start combining back up)
o Merge [38] and [27] $\rightarrow$ Sorted: [27, 38]
o Merge [43] and [3] $\rightarrow$ Sorted: [3, 43]
o Merge [9] and [82] $\rightarrow$ Sorted: [9, 82]
o [10] remains [10]
 Step 5: Merge Sub-arrays
o Merge [27, 38] and [3, 43]:
 Compare 27, 3 $\rightarrow$ Take 3.
 Compare 27, 43 $\rightarrow$ Take 27.
 Compare 38, 43 $\rightarrow$ Take 38.
 Take 43.
 Result: [3, 27, 38, 43]
o Merge [9, 82] and [10]:
 Compare 9, 10 $\rightarrow$ Take 9.
 Compare 82, 10 $\rightarrow$ Take 10.
 Take 82.
 Result: [9, 10, 82]
 Step 6: Final Merge
o Merge [3, 27, 38, 43] and [9, 10, 82]:
 Take 3, Take 9, Take 10, Take 27, Take 38, Take 43, Take 82.
o Final Output: [3, 9, 10, 27, 38, 43, 82]

3. Complexity:

 Time Complexity: $O(n \log n)$ in all cases (Best, Average, Worst).
 Space Complexity: $O(n)$ (Requires auxiliary array for merging).
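The divide/conquer/combine steps above can be sketched as runnable Python (illustrative; this version returns a new list rather than sorting in place, making the $O(n)$ auxiliary space explicit):

```python
def merge_sort(arr):
    """Divide-and-conquer Merge Sort; returns a new sorted list."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2          # Divide at the middle point
    left = merge_sort(arr[:mid])   # Conquer left half
    right = merge_sort(arr[mid:])  # Conquer right half
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```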

Q4. Explain Binary Search Algorithm with code/pseudocode and analyze its efficiency.

Answer:

1. Introduction:

Binary Search is an efficient searching algorithm used on sorted arrays. It works by
repeatedly dividing the search interval in half.

2. Algorithm Logic:

1. Compare the target value x with the middle element of the array.
2. If x matches the middle element, return the index.
3. If x is greater than the middle element, ignore the left half and recurse on the right
half.
4. If x is smaller, ignore the right half and recurse on the left half.

3. Pseudocode (Iterative):
Python
Function BinarySearch(arr, target):
    low = 0
    high = length(arr) - 1

    while low <= high:
        mid = (low + high) // 2  # Find middle index

        if arr[mid] == target:
            return mid           # Element found
        elif arr[mid] < target:
            low = mid + 1        # Search right half
        else:
            high = mid - 1       # Search left half

    return -1                    # Element not found

4. Efficiency Analysis:

 Iteration 1: Length of array = $n$
 Iteration 2: Length = $n/2$
 Iteration 3: Length = $n/4$
 ...
 Iteration k: Length = $n / 2^k$

The search terminates when the array size becomes 1 ($n / 2^k = 1$), which implies $n =
2^k$.

Taking log on both sides:

$$\log_2 n = \log_2 (2^k)$$


$$k = \log_2 n$$

Therefore, the Time Complexity is $O(\log n)$. This is significantly faster than Linear
Search ($O(n)$) for large datasets.

Q5. Detailed Comparison of Sorting Algorithms

Answer:

1. Introduction

Sorting is the process of arranging data in a particular format, typically ascending or
descending order. Different sorting algorithms are used depending on the constraints of time
(speed), space (memory), and stability (preserving order of equal elements). We generally
categorize them into $O(n^2)$ algorithms (simple but slow) and $O(n \log n)$ algorithms
(complex but fast).

2. Comparison Criteria
To compare these algorithms effectively, we look at:

 Time Complexity: How the time taken grows as input size ($n$) increases.
 Space Complexity: How much extra memory is needed.
 Stability: Whether duplicate elements retain their original relative order.
 In-Place: Whether the sorting happens within the original array or requires a new
one.

3. Detailed Analysis of Algorithms

A. Bubble Sort

 Mechanism: Repeatedly swaps adjacent elements if they are in the wrong order.
Large elements "bubble" to the top.
 Best Case: $O(n)$ (When the array is already sorted).
 Worst Case: $O(n^2)$ (When the array is reverse sorted).
 Space: $O(1)$ (In-place).
 Stability: Stable.
 Usage: Rarely used in real-world applications due to inefficiency. Good for teaching
concepts.

B. Insertion Sort

 Mechanism: Builds the sorted array one item at a time. It picks an element and
inserts it into its correct position among the previously sorted elements.
 Best Case: $O(n)$ (Nearly sorted data).
 Worst Case: $O(n^2)$.
 Space: $O(1)$.
 Stability: Stable.
 Usage: Efficient for small datasets ($n < 50$) or data that is already partially sorted.
It is often used as the base case for recursive algorithms like Quick Sort or Merge
Sort.

C. Selection Sort

 Mechanism: Repeatedly finds the minimum element from the unsorted part and puts
it at the beginning.
 Best/Worst Case: $O(n^2)$ (It always scans the remaining list, even if sorted).
 Space: $O(1)$.
 Stability: Unstable (Swapping long distances can disrupt order).
 Usage: Useful when memory writes are very expensive (it makes the minimum
number of swaps, $O(n)$).

D. Merge Sort

 Mechanism: A Divide-and-Conquer algorithm. Divides the array into halves, sorts
them recursively, and then merges the sorted halves.
 Time Complexity: $O(n \log n)$ in all cases (Best, Average, Worst).
 Space: $O(n)$ (Requires auxiliary array for merging).
 Stability: Stable.
 Usage: Preferred for Linked Lists (nodes can be moved without extra space) and
when stability is critical. Disadvantage is the extra memory requirement.

E. Quick Sort

 Mechanism: Divide-and-Conquer. Picks a "pivot" and partitions the array such that
smaller elements are on the left and larger on the right.
 Best/Avg Case: $O(n \log n)$.
 Worst Case: $O(n^2)$ (Rare, happens with poor pivot choice).
 Space: $O(\log n)$ (Stack space for recursion).
 Stability: Unstable.
 Usage: The de-facto standard for sorting arrays. It is generally faster than Merge
Sort in practice because it works in-place and has good cache locality.

F. Heap Sort

 Mechanism: Uses a Binary Heap data structure (Max-Heap). It builds a heap, then
repeatedly extracts the maximum element and moves it to the end.
 Time Complexity: $O(n \log n)$ in all cases.
 Space: $O(1)$ (In-place).
 Stability: Unstable.
 Usage: Great when you need guaranteed $O(n \log n)$ performance without the extra
memory overhead of Merge Sort. Used in systems with limited memory (embedded
systems).
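A minimal in-place Heap Sort sketch (illustrative; it uses an explicit sift-down rather than a library heap, mirroring the build-heap / extract-max mechanism described above):

```python
def heap_sort(arr):
    """In-place Heap Sort: build a max-heap, then repeatedly move the max to the end."""
    n = len(arr)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree rooted at `root`
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1  # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):  # Step 1: Build Max Heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # Step 2: Extract Max repeatedly
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # → [5, 6, 7, 11, 12, 13]
```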

4. Summary Table

| Algorithm | Best Time | Avg Time | Worst Time | Space | Stable? | In-Place? |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Bubble | $O(n)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | Yes | Yes |
| Insertion | $O(n)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | Yes | Yes |
| Selection | $O(n^2)$ | $O(n^2)$ | $O(n^2)$ | $O(1)$ | No | Yes |
| Merge | $O(n \log n)$ | $O(n \log n)$ | $O(n \log n)$ | $O(n)$ | Yes | No |
| Quick | $O(n \log n)$ | $O(n \log n)$ | $O(n^2)$ | $O(\log n)$ | No | Yes |
| Heap | $O(n \log n)$ | $O(n \log n)$ | $O(n \log n)$ | $O(1)$ | No | Yes |

5. Conclusion
 For small arrays, Insertion Sort is fastest.
 For large general-purpose arrays, Quick Sort is preferred.
 If memory is tight, Heap Sort is the best $O(n \log n)$ option.
 If stability is required (e.g., sorting by name, then by grade), Merge Sort is the
choice.

PART C (15 Marks) - Descriptive Questions

Q1. Problem Solving: Hashing

Question:

Insert the following keys into a Hash Table of size 10 using the Hash Function
$h(k) = k \mod 10$. Keys: 12, 18, 13, 2, 3, 23, 5, 15.

Show the table structure and trace the insertion for:

1. Linear Probing (Open Addressing).


2. Separate Chaining (Open Hashing).

Answer:

1. Linear Probing

Concept: In Linear Probing, if a collision occurs at index i, we check the next index (i+1) %
size. We repeat this until an empty slot is found. Hash Function: $h(k) = k \% 10$. Table
Size: 10 (Indices 0 to 9)

Step-by-Step Trace:

1. Insert 12: $12 \% 10 = 2$. Slot 2 is Empty. $\rightarrow$ Store 12 at Index 2.


2. Insert 18: $18 \% 10 = 8$. Slot 8 is Empty. $\rightarrow$ Store 18 at Index 8.
3. Insert 13: $13 \% 10 = 3$. Slot 3 is Empty. $\rightarrow$ Store 13 at Index 3.
4. Insert 2: $2 \% 10 = 2$.
o Slot 2 is occupied (by 12). Collision!
o Probe Next: $(2+1) = 3$. Occupied (by 13).
o Probe Next: $(3+1) = 4$. Empty.
o $\rightarrow$ Store 2 at Index 4.

5. Insert 3: $3 \% 10 = 3$.
o Slot 3 occupied (by 13). Collision!
o Probe Next: Slot 4 occupied (by 2).
o Probe Next: Slot 5 Empty.
o $\rightarrow$ Store 3 at Index 5.
6. Insert 23: $23 \% 10 = 3$.
o Slot 3 occupied.
o Slot 4 occupied.
o Slot 5 occupied.
o Slot 6 Empty.
o $\rightarrow$ Store 23 at Index 6.
7. Insert 5: $5 \% 10 = 5$.
o Slot 5 occupied (by 3). Collision!
o Slot 6 occupied (by 23).
o Slot 7 Empty.
o $\rightarrow$ Store 5 at Index 7.
8. Insert 15: $15 \% 10 = 5$.
o Slot 5 occupied.
o Slot 6 occupied.
o Slot 7 occupied.
o Slot 8 occupied (by 18).
o Slot 9 Empty.
o $\rightarrow$ Store 15 at Index 9.

Final Hash Table (Linear Probing):

| Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |

| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |

| Value | - | - | 12 | 13 | 2 | 3 | 23 | 5 | 18 | 15 |

2. Separate Chaining

Concept: Each slot in the hash table points to a Linked List. When a collision occurs, the
new key is simply appended to the list at that index.

Step-by-Step Trace:

1. Insert 12: $12 \% 10 = 2$. Add 12 to list at Index 2.


2. Insert 18: $18 \% 10 = 8$. Add 18 to list at Index 8.
3. Insert 13: $13 \% 10 = 3$. Add 13 to list at Index 3.
4. Insert 2: $2 \% 10 = 2$. Collision at Index 2. Append 2 to the list at Index 2.
o List at 2: 12 -> 2
5. Insert 3: $3 \% 10 = 3$. Collision at Index 3. Append 3.
o List at 3: 13 -> 3
6. Insert 23: $23 \% 10 = 3$. Collision at Index 3. Append 23.
o List at 3: 13 -> 3 -> 23
7. Insert 5: $5 \% 10 = 5$. Add 5 to list at Index 5.
8. Insert 15: $15 \% 10 = 5$. Collision at Index 5. Append 15.
o List at 5: 5 -> 15

Final Hash Table (Separate Chaining):

 Index 0, 1, 4, 6, 7, 9: NULL (Empty)


 Index 2: $\rightarrow [12] \rightarrow [2]$
 Index 3: $\rightarrow [13] \rightarrow [3] \rightarrow [23]$
 Index 5: $\rightarrow [5] \rightarrow [15]$
 Index 8: $\rightarrow [18]$

Q2. Detailed Analysis of Insertion Sort


Question: Sort the list [12, 11, 13, 5, 6] using Insertion Sort and analyze the number of
comparisons and movements.

1. Conceptual Logic

Insertion Sort divides the array into two virtual parts:

1. Sorted Sub-list: Initially contains only the first element (index 0).
2. Unsorted Sub-list: Contains the rest of the elements.

The Process: In each pass, we pick the first element (called the Key) from the unsorted part
and compare it backward into the sorted part. We shift larger elements one position to the
right to make space for the Key.

2. Step-by-Step Trace

Initial State:

 Array: [ 12 | 11, 13, 5, 6 ]


 (Note: The vertical bar | represents the boundary between Sorted and Unsorted
parts.)
 Sorted: [12]
 Unsorted: [11, 13, 5, 6]

Pass 1: (Insert Key = 11)

 Goal: Place 11 into the sorted sub-list [12].


 Comparison 1: Compare Key 11 with 12.
o Is $11 < 12$? Yes.
o Action: Shift 12 to the right.
 Insertion: Key 11 is placed at the empty slot (Index 0).
 Array State: [ 11, 12 | 13, 5, 6 ]
 Metrics: Comparisons: 1, Shifts: 1

Pass 2: (Insert Key = 13)

 Goal: Place 13 into the sorted sub-list [11, 12].


 Comparison 1: Compare Key 13 with 12 (last element of sorted part).
o Is $13 < 12$? No.
o Action: Stop. No shifts needed.
 Insertion: 13 stays at its current position (Index 2).
 Array State: [ 11, 12, 13 | 5, 6 ]
 Metrics: Comparisons: 1, Shifts: 0

Pass 3: (Insert Key = 5)

 Goal: Place 5 into the sorted sub-list [11, 12, 13].


 Comparison 1: Compare 5 with 13.
o $5 < 13$? Yes. Shift 13 right.
 Comparison 2: Compare 5 with 12.
o $5 < 12$? Yes. Shift 12 right.
 Comparison 3: Compare 5 with 11.
o $5 < 11$? Yes. Shift 11 right.
 Insertion: We reached the start of the list. Place 5 at Index 0.
 Array State: [ 5, 11, 12, 13 | 6 ]
 Metrics: Comparisons: 3, Shifts: 3

Pass 4: (Insert Key = 6)

 Goal: Place 6 into the sorted sub-list [5, 11, 12, 13].
 Comparison 1: Compare 6 with 13.
o $6 < 13$? Yes. Shift 13 right.
 Comparison 2: Compare 6 with 12.
o $6 < 12$? Yes. Shift 12 right.
 Comparison 3: Compare 6 with 11.
o $6 < 11$? Yes. Shift 11 right.
 Comparison 4: Compare 6 with 5.
o $6 < 5$? No. Stop.
 Insertion: Place 6 at the position immediately after 5 (Index 1).
 Array State: [ 5, 6, 11, 12, 13 ]
 Metrics: Comparisons: 4, Shifts: 3

3. Summary Table & Efficiency Analysis

| Pass No. | Key Value | Array State After Pass | Comparisons | Shifts |
| :--- | :--- | :--- | :--- | :--- |
| Initial | - | [12, 11, 13, 5, 6] | - | - |
| 1 | 11 | [11, 12, 13, 5, 6] | 1 | 1 |
| 2 | 13 | [11, 12, 13, 5, 6] | 1 | 0 |
| 3 | 5 | [5, 11, 12, 13, 6] | 3 | 3 |
| 4 | 6 | [5, 6, 11, 12, 13] | 4 | 3 |
| Total | | | 9 | 7 |

Conclusion:

 Total Operations: The algorithm performed 9 comparisons and 7 shift operations.


 Observation: Notice that Pass 3 and Pass 4 were the most expensive. This is because
the smallest elements (5 and 6) were located at the very end of the original list.
 Complexity Insight: In Insertion Sort, if small elements are at the end, they must
"travel" all the way to the start, triggering maximum shifts. This confirms the Worst
Case Time Complexity of $O(n^2)$ for reverse or nearly reverse-sorted data.
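The trace above can be reproduced with a small Python sketch (illustrative; variable names are my own) that counts comparisons and shifts alongside the sort:

```python
def insertion_sort_with_counts(arr):
    """Insertion Sort that also tallies comparisons and shifts."""
    comparisons = shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]          # first element of the unsorted part
        j = i - 1
        while j >= 0:
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]  # shift larger element one slot right
                shifts += 1
                j -= 1
            else:
                break
        arr[j + 1] = key      # insert the key into the gap
    return arr, comparisons, shifts

print(insertion_sort_with_counts([12, 11, 13, 5, 6]))
# → ([5, 6, 11, 12, 13], 9, 7)
```

Running it on [12, 11, 13, 5, 6] reproduces the totals from the summary table: 9 comparisons and 7 shifts.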

Q3. Scenario Based Application: Student Database Management

Question:

Consider a scenario where you have a database of student IDs (integers) and you need to
retrieve records based on ID very frequently. The IDs are not sequential (e.g., 1001, 5092,
1024).

1. Which searching technique would you suggest and why?


2. If you must use a sorting algorithm to prepare the data for Binary Search, which
one would you choose if memory is limited?

Answer:

1. Introduction

The problem presents two specific constraints:

 Data Nature: Non-sequential integer IDs.


 Performance Requirement: "Very frequent" retrieval implies we need the fastest
possible search time.
 System Constraint: Limited memory (RAM).

Based on these parameters, we analyze the best Searching and Sorting techniques below.

Part 1: Searching Technique Recommendation

Recommendation: Hashing (specifically using a Hash Table).

Detailed Justification:

A. Time Complexity Analysis (The Primary Reason)

 Linear Search: Checking every ID one by one takes $O(n)$ time. For 1 million
students, this is too slow.
 Binary Search: Requires sorted data. It takes $O(\log n)$ time. For 1 million records
($2^{20}$), it takes approx 20 comparisons. While fast, it is not "instant."
 Hashing: Hashing maps a key directly to an address in memory.
o Average Case: $O(1)$ (Constant Time).
o Impact: Whether you have 100 students or 10 million students, retrieving a
record takes roughly 1 calculation. For high-frequency systems, this
difference is massive.
B. Suitability for Non-Sequential Data

 Since IDs are integers like 1001, 5092, etc., we cannot use an array index directly
(Direct Addressing) because we would need an array size equal to the largest ID (e.g.,
if ID is 999999, we need 1 million slots even if we only have 5 students).
 Hashing solves this by using a Hash Function (e.g., $h(x) = x \mod \text{TableSize}$)
to map these large, scattered integers into a compact table.

C. Comparison Summary

| Feature | Hashing | Binary Search | Linear Search |
| :--- | :--- | :--- | :--- |
| Pre-requisite | None (Table creation) | Data must be Sorted | None |
| Time Complexity | $O(1)$ (Avg) | $O(\log n)$ | $O(n)$ |
| Suitability | Best for exact match | Best for range queries | Small data only |

Conclusion for Part 1: Hashing is selected because $O(1)$ access speed is ideal for
"frequent retrieval," and it efficiently handles non-sequential keys.

Part 2: Sorting Algorithm Recommendation

Context: The problem states that if we must sort the data (perhaps to enable Binary Search or
print a class list) and Memory is Limited, which algorithm fits best?

Recommendation: Heap Sort.

Detailed Analysis of Candidates:

To choose the right algorithm, we look at Space Complexity (memory usage) and Worst-
Case Performance.

1. Why Merge Sort is REJECTED:

 Mechanism: Merge sort divides the array and then merges sorted halves.
 Memory Issue: To merge two arrays, Merge Sort requires an auxiliary array of size
$n$.
 Space Complexity: $O(n)$.
 Verdict: Since the problem explicitly states "Memory is Limited," allocating double
the memory (for the auxiliary array) is unacceptable.

2. Why Quick Sort is RISKY:

 Mechanism: Uses partitioning around a pivot.


 Memory Issue: It sorts in-place (no extra array), but it uses Stack Space for
recursion.
 Risk: In the worst case (e.g., array is already sorted), the recursion stack can grow to
depth $n$.
 Space Complexity: $O(\log n)$ average, but $O(n)$ worst case.
 Verdict: Good, but risky if the data causes worst-case behavior.

3. Why Heap Sort is SELECTED:

 Mechanism: It views the array as a Complete Binary Tree (Heap).


1. Build Max Heap: Organize data so the largest is at the root.
2. Extract Max: Swap root with the last element and reduce heap size.
 Memory Advantage: It is strictly an In-Place Algorithm. It rearranges elements
within the existing array structure.
 Space Complexity: $O(1)$ (Only requires an insignificant, constant amount of space
for variables).
 Time Reliability: Unlike Quick Sort, Heap Sort guarantees $O(n \log n)$ time even
in the worst case.

Comparison Table for Decision:

| Algorithm | Time (Worst) | Space (Worst) | Stable? | Verdict |
| :--- | :--- | :--- | :--- | :--- |
| Merge Sort | $O(n \log n)$ | $O(n)$ | Yes | Rejected (High Memory) |
| Quick Sort | $O(n^2)$ | $O(n)$ | No | Risky (Stack Overflow) |
| Heap Sort | $O(n \log n)$ | $O(1)$ | No | Selected (Efficient & Safe) |

Final Conclusion

For a frequent retrieval system with non-sequential IDs and limited memory:

1. Storage/Search Strategy: Use Hashing with Open Addressing (to save space on
pointers) for $O(1)$ fast access.
2. Maintenance/Sorting Strategy: Use Heap Sort to sort the data when needed, as it
guarantees efficient sorting ($n \log n$) without consuming extra RAM ($O(1)$
space).

UNIT – 5
PART A (2 Marks) - Short Answer Questions

1. Define a Graph ADT.


A Graph ADT is a non-linear data structure consisting of a set of vertices $V$ and a set of
edges $E$ connecting pairs of vertices. It supports operations like adding vertices, adding
edges, removing vertices/edges, and traversing the graph.

2. List the different ways to represent a graph.

Graphs are primarily represented in two ways:

 Adjacency Matrix: A 2D array where A[i][j] = 1 (or weight) if there is an edge
between vertex i and j, else 0.
 Adjacency List: An array of linked lists where each list index i stores the vertices
adjacent to vertex i.

3. Differentiate between BFS and DFS.

 BFS (Breadth-First Search): Traverses the graph level by level using a Queue. It
finds the shortest path in unweighted graphs.
 DFS (Depth-First Search): Traverses as deep as possible along each branch before
backtracking, using a Stack (or recursion).

4. What is a Directed Acyclic Graph (DAG)?

A DAG is a directed graph that contains no cycles. It is a fundamental structure used in
scheduling problems and topological sorting.

5. Define Topological Ordering.

Topological ordering of a DAG is a linear ordering of its vertices such that for every directed
edge $uv$ from vertex $u$ to vertex $v$, vertex $u$ comes before $v$ in the ordering.

6. What is a Minimum Spanning Tree (MST)?

An MST of a connected, undirected, weighted graph is a subgraph that connects all vertices
together with the minimum possible total edge weight and contains no cycles.

7. State the principle of Greedy Algorithms.

Greedy algorithms make the locally optimal choice at each step with the hope of finding a
global optimum. Examples: Prim's and Kruskal's algorithms for MST, Dijkstra's for shortest
path.

8. What is Dynamic Programming? How is it different from Greedy?

Dynamic Programming (DP) solves complex problems by breaking them down into simpler
overlapping subproblems and storing the results (memoization) to avoid redundant
computations. Unlike Greedy, DP considers all possible options to find the global optimum.

9. Define P and NP complexity classes.

 P (Polynomial time): The set of decision problems solvable by a deterministic
algorithm in polynomial time.
 NP (Nondeterministic Polynomial time): The set of decision problems verifiable in
polynomial time (a solution can be checked quickly).
PART B (13 Marks) - Descriptive Questions

1: Explain Graph Traversals (BFS and DFS) with suitable algorithms and examples.

Answer:

1. Introduction to Graph Traversal

Graph traversal is the process of visiting (checking and/or updating) each vertex in a graph
exactly once. Unlike trees, graphs contain cycles, so we must keep track of "Visited" nodes to
avoid infinite loops. The two most common traversal techniques are Breadth-First Search
(BFS) and Depth-First Search (DFS).

2. Breadth-First Search (BFS)

Definition:

BFS is a traversal algorithm that starts at a selected node (source) and explores all of its
immediate neighbors at the present depth before moving on to nodes at the next depth level.
It explores the graph "layer by layer."

 Data Structure Used: Queue (FIFO - First In, First Out).


 Principle: Level Order Traversal.

Algorithm:

1. Initialize a boolean array visited[] to false and create an empty Queue Q.


2. Select a starting vertex S, mark it as visited, and enqueue it into Q.
3. While Q is not empty:
o Dequeue a vertex V from Q and print/process it.
o For every adjacent vertex U of V:
 If U is not visited, mark U as visited and enqueue it.

Pseudocode:

Python
BFS(Graph, StartVertex):
    Queue Q
    Set Visited = {}

    Visited.add(StartVertex)
    Q.enqueue(StartVertex)

    while Q is not empty:
        Current = Q.dequeue()
        print(Current)

        for neighbor in Graph[Current]:
            if neighbor not in Visited:
                Visited.add(neighbor)
                Q.enqueue(neighbor)

Example Trace:

Consider the following Graph:

 Nodes: A, B, C, D, E, F
 Edges: (A-B), (A-C), (B-D), (B-E), (C-F)

Trace (Starting Node: A):

| Step | Operation | Queue Status (Front → Rear) | Visited Array | Output |
| :--- | :--- | :--- | :--- | :--- |
| 1 | Start at A. Mark A visited. Enqueue A. | [A] | {A} | - |
| 2 | Dequeue A. Enqueue neighbors B, C. | [B, C] | {A, B, C} | A |
| 3 | Dequeue B. Enqueue neighbors D, E. | [C, D, E] | {A, B, C, D, E} | B |
| 4 | Dequeue C. Enqueue neighbor F. | [D, E, F] | {A, B, C, D, E, F} | C |
| 5 | Dequeue D. No unvisited neighbors. | [E, F] | {All} | D |
| 6 | Dequeue E. No unvisited neighbors. | [F] | {All} | E |
| 7 | Dequeue F. No unvisited neighbors. | [] (Empty) | {All} | F |

Final BFS Output: A, B, C, D, E, F
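The pseudocode can be made runnable with Python's collections.deque as the FIFO queue (the adjacency lists below encode the example graph):

```python
from collections import deque

def bfs(graph, start):
    """Level-order traversal using a FIFO queue; returns the visit order."""
    visited = {start}
    order = []
    q = deque([start])
    while q:
        current = q.popleft()        # dequeue from the front
        order.append(current)
        for neighbor in graph[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                q.append(neighbor)   # enqueue at the rear
    return order

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(bfs(graph, 'A'))  # → ['A', 'B', 'C', 'D', 'E', 'F']
```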

3. Depth-First Search (DFS)

Definition:
DFS is a traversal algorithm that starts at the root node and explores as far as possible along
each branch before backtracking. It goes "deep" into the graph structure.

 Data Structure Used: Stack (LIFO - Last In, First Out) or Recursion (Implicit
Stack).
 Principle: Backtracking.

Algorithm:

1. Initialize a boolean array visited[] to false.


2. Push the starting vertex S onto the Stack.
3. While the Stack is not empty:
o Pop a vertex V from the Stack.
o If V is not visited:
 Mark V as visited and print it.
 Push all unvisited adjacent vertices of V onto the Stack.

Pseudocode (Recursive Approach):

Python
DFS(Graph, Vertex, Visited):
    Mark Vertex as Visited
    print(Vertex)

    for neighbor in Graph[Vertex]:
        if neighbor not in Visited:
            DFS(Graph, neighbor, Visited)

Example Trace:

Using the same Graph as above:

 Edges: (A-B), (A-C), (B-D), (B-E), (C-F)

Trace (Starting Node: A):

Note: We assume neighbors are processed in alphabetical order for consistency.

| Step | Operation | Recursion Stack Status | Visited | Output |
| :--- | :--- | :--- | :--- | :--- |
| 1 | Visit A. Go to first neighbor B. | Stack: [A] | {A} | A |
| 2 | Visit B. Go to first neighbor D. | Stack: [A, B] | {A, B} | B |
| 3 | Visit D. No neighbors. Backtrack to B. | Stack: [A, B, D] $\rightarrow$ Pop D | {A, B, D} | D |
| 4 | Back at B. Go to next neighbor E. | Stack: [A, B] | - | - |
| 5 | Visit E. No neighbors. Backtrack to B. | Stack: [A, B, E] $\rightarrow$ Pop E | {A, B, D, E} | E |
| 6 | Back at B. No more neighbors. Backtrack to A. | Stack: [A] | - | - |
| 7 | Back at A. Go to next neighbor C. | Stack: [A] | - | - |
| 8 | Visit C. Go to neighbor F. | Stack: [A, C] | {..., C} | C |
| 9 | Visit F. No neighbors. Backtrack. | Stack: [A, C, F] $\rightarrow$ Pop F | {..., F} | F |

Final DFS Output: A, B, D, E, C, F
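The recursive pseudocode can be run directly as Python with small changes (illustrative; neighbors are processed in the order listed, matching the trace's alphabetical assumption):

```python
def dfs(graph, vertex, visited=None, order=None):
    """Recursive DFS; returns the visit order."""
    if visited is None:
        visited, order = set(), []
    visited.add(vertex)
    order.append(vertex)
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited, order)  # go deep before backtracking
    return order

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}
print(dfs(graph, 'A'))  # → ['A', 'B', 'D', 'E', 'C', 'F']
```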

4. Complexity Analysis

For a Graph with $V$ vertices and $E$ edges:

 Time Complexity:
o Adjacency List: $O(V + E)$ (Each vertex and edge is visited once).
o Adjacency Matrix: $O(V^2)$ (Scanning the entire row for each vertex).
 Space Complexity: $O(V)$ (For the Queue/Stack and Visited array).

5. Comparison and Applications


| Feature | BFS (Breadth-First Search) | DFS (Depth-First Search) |
| :--- | :--- | :--- |
| Structure | Uses Queue (FIFO). | Uses Stack (LIFO) or Recursion. |
| Behavior | Explores neighbors level by level. | Explores a path deeply, then backtracks. |
| Shortest Path | Guarantees shortest path in unweighted graphs. | Does not guarantee shortest path. |
| Applications | Peer-to-Peer Networks, GPS Navigation. | Maze solving, Cycle Detection, Topological Sort. |

Question: Discuss the Shortest Path algorithms: Dijkstra’s and Bellman-Ford.

Answer:

1. Introduction

Shortest Path algorithms are designed to find the path with the minimum total edge weight
between two nodes in a graph. The two most fundamental algorithms for the "Single-Source
Shortest Path" problem (finding shortest paths from one source node to all other nodes) are
Dijkstra's Algorithm and Bellman-Ford Algorithm.

2. Dijkstra’s Algorithm

Concept:

Dijkstra's algorithm is a Greedy Algorithm. It maintains a set of visited vertices and a set of
unvisited vertices. At every step, it selects the unvisited vertex with the smallest known
distance from the source, visits it, and updates the distances of its neighbors.

Key Characteristics:

 Approach: Greedy (Always picks the closest known node).


 Data Structure: Priority Queue (Min-Heap) is often used for efficiency.
 Constraint: It fails if the graph contains edges with negative weights.

Algorithm Steps:

1. Initialization: Set distance to the source node as 0 and infinity (∞) for all other nodes.
Mark all nodes as unvisited.
2. Selection: Pick the unvisited node u with the smallest distance.
3. Relaxation: For every neighbor v of u, check if the current path to v through u is
shorter than the previously known distance to v.
o Condition: If dist[u] + weight(u, v) < dist[v]
o Update: dist[v] = dist[u] + weight(u, v)
4. Repeat: Repeat steps 2 and 3 until all nodes are visited or the destination is reached.

Pseudocode:

Python
function Dijkstra(Graph, Source):
    dist[] = {infinity, infinity, ...}
    dist[Source] = 0
    PriorityQueue Q
    Q.insert(Source, 0)

    while Q is not empty:
        u = Q.extract_min()        # Get node with smallest distance

        for each neighbor v of u:
            alt = dist[u] + weight(u, v)
            if alt < dist[v]:      # Relaxation Step
                dist[v] = alt
                Q.insert(v, alt)

Example Trace:

Consider a graph: A $\rightarrow$ B (4), A $\rightarrow$ C (1), C $\rightarrow$ B (2),
B $\rightarrow$ D (1), C $\rightarrow$ D (5).

 Start at A: dist[A]=0, others ∞.
 Visit A: Neighbors B, C.
o dist[B] = min(∞, 0+4) = 4
o dist[C] = min(∞, 0+1) = 1
 Next closest is C (dist=1): Visit C. Neighbors B, D.
o Path A$\rightarrow$C$\rightarrow$B: 1 + 2 = 3. Since 3 < 4, update dist[B] = 3.
o Path A$\rightarrow$C$\rightarrow$D: 1 + 5 = 6. Update dist[D] = 6.
 Next closest is B (dist=3): Visit B. Neighbor D.
o Path A$\rightarrow$C$\rightarrow$B$\rightarrow$D: 3 + 1 = 4. Since 4 < 6, update dist[D] = 4.
 Final Distances: A:0, C:1, B:3, D:4.
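The trace can be verified with a runnable sketch that uses Python's heapq as the min-priority queue (illustrative; the graph maps each vertex to a list of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(graph, source):
    """Greedy single-source shortest paths; graph maps u -> [(v, weight), ...]."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)   # extract-min
        if d > dist[u]:
            continue               # stale heap entry, skip
        for v, w in graph[u]:
            alt = d + w
            if alt < dist[v]:      # relaxation step
                dist[v] = alt
                heapq.heappush(pq, (alt, v))
    return dist

graph = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)],
         'C': [('B', 2), ('D', 5)], 'D': []}
print(dijkstra(graph, 'A'))  # → {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```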

3. Bellman-Ford Algorithm

Concept:

Bellman-Ford is based on Dynamic Programming. Unlike Dijkstra's, which greedily chooses
the closest node, Bellman-Ford relaxes all edges repeatedly. This brute-force-like approach
allows it to handle more complex scenarios.

Key Characteristics:

 Approach: Dynamic Programming (Bottom-up).


 Capability: Can handle Negative Weight Edges.
 Feature: Can detect Negative Weight Cycles (a cycle where the total sum of edge
weights is negative).

Algorithm Steps:

1. Initialization: Set distance to source as 0 and infinity (∞) for others.


2. Relaxation: Repeat the following process |V| - 1 times (where |V| is the number of
vertices):
o For every edge (u, v) with weight w in the graph:
o If dist[u] + w < dist[v], then dist[v] = dist[u] + w.
3. Negative Cycle Check: Perform one more relaxation pass. If any distance decreases,
a negative weight cycle exists.

Pseudocode:

Python
function BellmanFord(Graph, Source):
    dist[] = {infinity, infinity, ...}
    dist[Source] = 0

    # Step 1: Relax all edges |V| - 1 times
    for i from 1 to |V|-1:
        for each edge (u, v) with weight w:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # Step 2: Check for negative weight cycles
    for each edge (u, v) with weight w:
        if dist[u] + w < dist[v]:
            return "Error: Negative Cycle Detected"

4. Detailed Comparison
| Feature | Dijkstra’s Algorithm | Bellman-Ford Algorithm |
| :--- | :--- | :--- |
| Paradigm | Greedy Approach. | Dynamic Programming. |
| Negative Weights | Cannot handle negative weights (may give wrong answer or infinite loop). | Can handle negative weights. |
| Efficiency (Time) | Faster: $O(E \log V)$ (using Min-Heap). | Slower: $O(V \times E)$. |
| Relaxation | Relaxes edges of the current specific node only. | Relaxes every single edge in the graph V-1 times. |
| Usage | GPS systems, network routing (OSPF). | Routing protocols like RIP, and finance (arbitrage detection). |

5. Conclusion

 If edge weights are non-negative and speed is crucial, Dijkstra’s is the best choice.
 If the graph contains negative weights or if you need to detect negative cycles,
Bellman-Ford is mandatory despite its slower performance.
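For illustration, a runnable Bellman-Ford sketch (an edge-list representation is assumed) on a small graph containing a negative edge, the case Dijkstra's cannot handle safely:

```python
def bellman_ford(vertices, edges, source):
    """Relax every edge |V|-1 times; a further improving pass signals a negative cycle."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):      # Step 1: |V|-1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                   # Step 2: negative-cycle check
        if dist[u] + w < dist[v]:
            raise ValueError("Negative cycle detected")
    return dist

# Edge C -> B has weight -3
edges = [('A', 'B', 4), ('A', 'C', 2), ('C', 'B', -3), ('B', 'D', 2)]
print(bellman_ford(['A', 'B', 'C', 'D'], edges, 'A'))
# → {'A': 0, 'B': -1, 'C': 2, 'D': 1}
```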
Q3. Explain the construction of a Minimum Spanning Tree using Prim’s Algorithm.

Answer:

1. Introduction

A Minimum Spanning Tree (MST) of a connected, undirected, weighted graph is a subgraph
that connects all vertices together with the minimum possible total edge weight and contains
no cycles.

Prim’s Algorithm is a greedy algorithm used to find the MST. It operates by building the tree
one vertex at a time, always adding the cheapest connection from the tree to a node outside
the tree.

2. Algorithm Steps

1. Initialize: Start with an arbitrary node (source). Maintain three values for every
vertex:
o Key (Minimum weight edge connecting it to the MST, initialized to $\infty$).
o Parent (The node in the MST it is connected to, initialized to Null).
o Visited (Boolean status, initialized to False).
2. Start: Set the Key of the source vertex to 0.
3. Iterate: Repeat until all vertices are included in the MST:
o Select the unvisited vertex u with the smallest Key value.
o Mark u as Visited.
o For every adjacent vertex v of u:
 If v is not visited and the weight of edge (u, v) is smaller than the
current Key of v:
 Update Key[v] = weight(u, v).
 Update Parent[v] = u.

3. Illustrative Trace

Graph:

 Vertices: A, B, C, D, E
 Edges: (A-B: 2), (A-C: 3), (B-C: 1), (B-D: 4), (B-E: 5), (C-E: 6), (D-E: 2)

Step-by-Step Table:

| Step | Operation | Vertex A | Vertex B | Vertex C | Vertex D | Vertex E | MST Set (Visited) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Init | Start at A | 0 (P: -) | $\infty$ | $\infty$ | $\infty$ | $\infty$ | {} |
| 1 | Pick min (A). Relax B, C. | Visited | 2 (P: A) | 3 (P: A) | $\infty$ | $\infty$ | {A} |
| 2 | Pick min (B, wt 2). Relax C, D, E. | Visited | Visited | 1 (P: B) | 4 (P: B) | 5 (P: B) | {A, B} |
| 3 | Pick min (C, wt 1). Relax E. | Visited | Visited | Visited | 4 (P: B) | 5 (P: B) | {A, B, C} |
| 4 | Pick min (D, wt 4). Relax E. | Visited | Visited | Visited | Visited | 2 (P: D) | {A, B, C, D} |
| 5 | Pick min (E, wt 2). Done. | Visited | Visited | Visited | Visited | Visited | {A, B, C, D, E} |

Note: In Step 3, connecting C to E (wt 6) doesn't update E because its current key (5 from B)
is smaller. In Step 4, D connects to E with weight 2, which is better than 5, so E updates.

Final MST Edges:

1. (A, B) - Weight 2
2. (B, C) - Weight 1
3. (B, D) - Weight 4
4. (D, E) - Weight 2

Total Weight: $2 + 1 + 4 + 2 = 9$.

4. Complexity Analysis

 Using Adjacency Matrix: $O(V^2)$.
 Using Binary Heap (Priority Queue) + Adjacency List: $O(E \log V)$.

Q4. Explain Topological Sort with an example.

Answer:

1. Introduction

Topological Sort is a linear ordering of vertices in a Directed Acyclic Graph (DAG) such
that for every directed edge $U \rightarrow V$, vertex $U$ appears before vertex $V$ in the
ordering.

 It is impossible if the graph contains a cycle.
 Applications: Scheduling tasks, resolving dependencies (e.g., Makefiles, Course Prerequisites), compiling code.
2. Algorithm (Kahn’s Algorithm)

This is a BFS-based approach relying on In-Degree (number of incoming edges).

1. Calculate In-Degree: Compute the in-degree for every vertex in the graph.
2. Initialize Queue: Enqueue all vertices with In-Degree == 0 (nodes with no
dependencies).
3. Process Queue: While the queue is not empty:
o Dequeue a vertex u and add it to the Topological Order list.
o For every neighbor v of u:
 Decrease the in-degree of v by 1 (simulating the removal of u).
 If In-Degree[v] becomes 0, enqueue v.
4. Check: If the Topological Order list contains fewer vertices than the graph, a cycle
exists.
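
Kahn's Algorithm translates directly into Python. This is an illustrative sketch (not part of the original notes), using the course-prerequisite example traced below.

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: BFS on in-degrees. Returns the topological
    order, or None if the graph contains a cycle."""
    adj = {v: [] for v in vertices}
    in_degree = {v: 0 for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        in_degree[v] += 1
    # Step 2: enqueue all vertices with no dependencies
    queue = deque(v for v in vertices if in_degree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            in_degree[v] -= 1      # simulate removing edge u -> v
            if in_degree[v] == 0:
                queue.append(v)
    # Step 4: fewer vertices in the output than in the graph => cycle
    return order if len(order) == len(vertices) else None

# Course-prerequisite example from the trace
deps = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(topological_sort([1, 2, 3, 4, 5], deps))  # [1, 2, 3, 4, 5]
```
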

3. Example Trace

Scenario: Course Prerequisites.

 Tasks: 1 (Math), 2 (Physics), 3 (CS Basics), 4 (Advanced CS), 5 (Project).


 Dependencies (Edges): $1 \rightarrow 2$, $1 \rightarrow 3$, $2 \rightarrow 4$, $3 \rightarrow 4$, $4 \rightarrow 5$.

Step-by-Step Execution:

| Step | Queue Status | In-Degree Array [1, 2, 3, 4, 5] | Output List | Action |
| :--- | :--- | :--- | :--- | :--- |
| Init | [1] | [0, 1, 1, 2, 1] | [] | Only 1 has in-degree 0. Enqueue 1. |
| 1 | [2, 3] | [-, 0, 0, 2, 1] | [1] | Dequeue 1. Reduce neighbors 2 & 3; both become 0. Enqueue 2, 3. |
| 2 | [3] | [-, -, 0, 1, 1] | [1, 2] | Dequeue 2. Reduce neighbor 4 (2 $\to$ 1). |
| 3 | [4] | [-, -, -, 0, 1] | [1, 2, 3] | Dequeue 3. Reduce neighbor 4 (1 $\to$ 0). Enqueue 4. |
| 4 | [5] | [-, -, -, -, 0] | [1, 2, 3, 4] | Dequeue 4. Reduce neighbor 5 (1 $\to$ 0). Enqueue 5. |
| 5 | [] | [-, -, -, -, -] | [1, 2, 3, 4, 5] | Dequeue 5. Done. |

Final Topological Order: 1, 2, 3, 4, 5 (Note: 1, 3, 2, 4, 5 is also valid).

4. Complexity

 Time Complexity: $O(V + E)$ (Each vertex and edge is processed once).
 Space Complexity: $O(V)$ (For storing in-degrees and the queue).

Q5. What is Dynamic Programming? Explain the Floyd-Warshall algorithm for All-
Pairs Shortest Paths.

Answer:

1. Introduction to Dynamic Programming (DP)

Dynamic Programming is an algorithmic paradigm used to solve complex problems by
breaking them down into simpler overlapping subproblems. It stores the results of these
subproblems (memoization or tabulation) to avoid redundant computations.

 Key Property: Optimal Substructure (the optimal solution to the problem contains optimal solutions to its subproblems).
 Difference from Greedy: Greedy makes a locally optimal choice at each step; DP exhaustively considers all possible choices to find the global optimum.
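
As a minimal illustration of memoization (an example added here, not from the original notes): naive recursive Fibonacci recomputes the same subproblems exponentially often, while caching each result makes it linear.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each fib(k) is computed once and cached; without the cache,
    the recursion revisits overlapping subproblems exponentially."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, instant instead of billions of calls
```
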

2. Floyd-Warshall Algorithm

This is an All-Pairs Shortest Path algorithm. It finds the shortest distances between every
pair of vertices in a weighted graph. It works with positive and negative edge weights (but no
negative cycles).

Core Concept:

Let $D[i][j]$ be the shortest distance from vertex $i$ to vertex $j$. The algorithm iteratively
improves the estimate of the shortest path between two vertices ($i, j$) by checking if a path
through an intermediate vertex $k$ is shorter than the direct path.

Recurrence Relation:

$$D^k[i][j] = \min( D^{k-1}[i][j], \quad D^{k-1}[i][k] + D^{k-1}[k][j] )$$

 $D^{k-1}[i][j]$: Current known shortest distance.
 $D^{k-1}[i][k] + D^{k-1}[k][j]$: Distance going from $i$ to $j$ via vertex $k$.
3. Algorithm Steps

1. Initialize: Create a $|V| \times |V|$ matrix.
o If $i == j$, dist is 0.
o If edge $(i, j)$ exists, dist is weight$(i, j)$.
o Else, dist is $\infty$.
2. Iterate: Use three nested loops:
o Outer loop $k$ from 1 to $V$ (pick intermediate node).
o Middle loop $i$ from 1 to $V$ (source).
o Inner loop $j$ from 1 to $V$ (destination).
o Update $D[i][j]$ using the recurrence relation.

4. Example Matrix Trace

Graph: Nodes 1, 2, 3. Edges: $1 \to 2$ (4), $2 \to 1$ (3), $2 \to 3$ (1), $3 \to 1$ (6).

Initial Matrix ($D^0$):

| | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 | 0 | 4 | $\infty$ |
| 2 | 3 | 0 | 1 |
| 3 | 6 | $\infty$ | 0 |

Iteration $k=1$ (Via Node 1):

 Check if going via 1 improves paths.


 Update $D[2][3]$? Direct is 1. Via 1: $2 \to 1 \to 3$ ($3 + \infty = \infty$). No
change.
 Update $D[3][2]$? Direct $\infty$. Via 1: $3 \to 1 \to 2$ ($6 + 4 = 10$). Update to
10.

Iteration $k=2$ (Via Node 2):

 Update $D[1][3]$? Direct $\infty$. Via 2: $1 \to 2 \to 3$ ($4 + 1 = 5$). Update to 5.
 Update $D[3][1]$? Direct 6. Via 2: $3 \to 2 \to 1$ ($10 + 3 = 13$). No change (6 is
better).

Final Matrix:

| | 1 | 2 | 3 |
| --- | --- | --- | --- |
| 1 | 0 | 4 | 5 |
| 2 | 3 | 0 | 1 |
| 3 | 6 | 10 | 0 |
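
The matrix trace above can be reproduced with a short Python sketch (illustrative, not from the original notes; vertices are relabeled 0-based, so node 1 becomes index 0).

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """All-pairs shortest paths. `edges` maps (i, j) -> weight with
    0-based vertices; returns the n x n distance matrix."""
    # Initialization: 0 on the diagonal, edge weights, infinity elsewhere
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        dist[i][j] = w
    for k in range(n):              # intermediate vertex
        for i in range(n):          # source
            for j in range(n):      # destination
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example trace: 1->2 (4), 2->1 (3), 2->3 (1), 3->1 (6), 0-based
edges = {(0, 1): 4, (1, 0): 3, (1, 2): 1, (2, 0): 6}
for row in floyd_warshall(3, edges):
    print(row)  # final matrix: [0, 4, 5], [3, 0, 1], [6, 10, 0]
```
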

5. Complexity

 Time Complexity: $O(V^3)$ (Three nested loops).
 Space Complexity: $O(V^2)$ (To store the distance matrix).
 Suitability: Best for dense graphs or small graphs where $V < 500$.

PART C (15 Marks) - Descriptive Questions

Question:

Consider a network of cities connected by roads with specific lengths. Apply Dijkstra’s
Algorithm to find the shortest path from a starting city (A) to all other cities.

Graph Edges:

 $A \rightarrow B (4)$
 $A \rightarrow C (2)$
 $B \rightarrow C (5)$
 $B \rightarrow D (10)$
 $C \rightarrow E (3)$
 $E \rightarrow D (4)$
 $D \rightarrow F (11)$

Answer:

1. Introduction and Graph Representation

Dijkstra’s Algorithm is a greedy algorithm that finds the shortest path from a single source
vertex to all other vertices in a weighted graph with non-negative edge weights.

Objective: Find the minimum distance from Source Node A to nodes B, C, D, E, F.

2. Initialization

We maintain a table to track the Shortest Distance found so far and the Predecessor
(Parent) for path reconstruction.

 Set $Q$ (Unvisited Nodes): $\{A, B, C, D, E, F\}$
 Distance to Source (A): 0
 Distance to others: $\infty$ (Infinity)

Initial State Table:

| Vertex | Shortest Distance ($D$) | Previous Node ($P$) | Status |
| :--- | :--- | :--- | :--- |
| A | 0 | - | Unvisited |
| B | $\infty$ | - | Unvisited |
| C | $\infty$ | - | Unvisited |
| D | $\infty$ | - | Unvisited |
| E | $\infty$ | - | Unvisited |
| F | $\infty$ | - | Unvisited |

3. Step-by-Step Execution (Iterations)

Iteration 1: Select Vertex A

 Current Node: A (Distance = 0).


 Neighbors of A:
1. B: Cost = $0 + 4 = 4$. ($4 < \infty$, Update B). Parent of B $\rightarrow$ A.
2. C: Cost = $0 + 2 = 2$. ($2 < \infty$, Update C). Parent of C $\rightarrow$ A.
 Mark A as Visited.
 Unvisited Set: $\{B, C, D, E, F\}$

Table after Iteration 1:

| Vertex | Distance | Parent |
| :--- | :--- | :--- |
| A | 0 | - |
| B | 4 | A |
| C | 2 | A |
| D, E, F | $\infty$ | - |

Iteration 2: Select Vertex C

 Selection: Among unvisited $\{B(4), C(2), D(\infty), E(\infty), F(\infty)\}$, the vertex with the minimum distance is C (2).
 Current Node: C (Distance = 2).
 Neighbors of C:
1. E: Cost = $D[C] + \text{weight}(C, E) = 2 + 3 = 5$.
2. Update: Since $5 < \infty$, update $D[E] = 5$. Set Parent of E $\rightarrow$
C.
 Mark C as Visited.
 Unvisited Set: $\{B, D, E, F\}$

Table after Iteration 2:

| Vertex | Distance | Parent |
| :--- | :--- | :--- |
| A | 0 | - |
| C | 2 | A |
| B | 4 | A |
| E | 5 | C |
| D, F | $\infty$ | - |

Iteration 3: Select Vertex B

 Selection: Among unvisited $\{B(4), E(5), D(\infty), F(\infty)\}$, minimum is B (4).


 Current Node: B (Distance = 4).
 Neighbors of B:
1. C: Edge $B \rightarrow C$ (Weight 5). Path cost = $4 + 5 = 9$.
 Current $D[C] = 2$. Since $9 > 2$, Do Not Update.
2. D: Edge $B \rightarrow D$ (Weight 10). Path cost = $4 + 10 = 14$.
 Update $D[D] = 14$. Set Parent of D $\rightarrow$ B.
 Mark B as Visited.
 Unvisited Set: $\{D, E, F\}$

Table after Iteration 3:

| Vertex | Distance | Parent |
| :--- | :--- | :--- |
| A, B, C | Visited | |
| E | 5 | C |
| D | 14 | B |
| F | $\infty$ | - |

Iteration 4: Select Vertex E

 Selection: Among unvisited $\{E(5), D(14), F(\infty)\}$, minimum is E (5).


 Current Node: E (Distance = 5).
 Neighbors of E:
1. D: Edge $E \rightarrow D$ (Weight 4).
 New Path Cost = $D[E] + 4 = 5 + 4 = 9$.
 Current $D[D] = 14$.
 Relaxation: Since $9 < 14$, we found a shorter path! Update $D[D] =
9$. Change Parent of D $\rightarrow$ E.
 Mark E as Visited.
 Unvisited Set: $\{D, F\}$

Table after Iteration 4:

| Vertex | Distance | Parent |
| :--- | :--- | :--- |
| A, B, C, E | Visited | |
| D | 9 | E |
| F | $\infty$ | - |

Iteration 5: Select Vertex D

 Selection: Among unvisited $\{D(9), F(\infty)\}$, minimum is D (9).


 Current Node: D (Distance = 9).
 Neighbors of D:
1. F: Edge $D \rightarrow F$ (Weight 11).
 Path Cost = $9 + 11 = 20$.
 Update $D[F] = 20$. Parent of F $\rightarrow$ D.
 Mark D as Visited.
 Unvisited Set: $\{F\}$

Table after Iteration 5:

| Vertex | Distance | Parent |
| :--- | :--- | :--- |
| A...E | Visited | |
| F | 20 | D |

Iteration 6: Select Vertex F

 Selection: Only F (20) remains.


 Neighbors: None.
 Mark F as Visited.
 Algorithm Terminated.
4. Final Output Table

| Destination City | Min Distance from A | Shortest Path Sequence |
| :--- | :--- | :--- |
| A | 0 | A (Start) |
| B | 4 | A $\rightarrow$ B |
| C | 2 | A $\rightarrow$ C |
| E | 5 | A $\rightarrow$ C $\rightarrow$ E |
| D | 9 | A $\rightarrow$ C $\rightarrow$ E $\rightarrow$ D |
| F | 20 | A $\rightarrow$ C $\rightarrow$ E $\rightarrow$ D $\rightarrow$ F |

5. Conclusion

Using Dijkstra's algorithm, we successfully determined the shortest paths.

 Note specifically that for City D, the path through B ($A \rightarrow B \rightarrow D$) costs 14, but the algorithm correctly identified the path through E ($A \rightarrow C \rightarrow E \rightarrow D$), which costs only 9. This demonstrates the "Relaxation" property of the algorithm.
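
The whole trace can be checked with a short Python implementation (an illustrative sketch using a min-heap, not part of the original question).

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths over {u: [(v, w), ...]}.
    Returns (distances, parents) for path reconstruction."""
    dist = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: u was already finalized with a shorter path
        for v, w in graph[u]:
            if d + w < dist[v]:       # relaxation step
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, parent

# Road network from the question (directed edges)
graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', 5), ('D', 10)],
    'C': [('E', 3)],
    'D': [('F', 11)],
    'E': [('D', 4)],
    'F': [],
}
dist, parent = dijkstra(graph, 'A')
print(dist)  # {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 5, 'F': 20}
```

Note that `parent['D']` ends up as `'E'`, matching the relaxation in Iteration 4 where the path via E replaced the costlier path via B.
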

Question:

For the given graph, construct the Minimum Spanning Tree (MST) using (a) Prim’s
Algorithm and (b) Kruskal’s Algorithm. Calculate the total cost.

Solution:

Assumption: Since a specific graph was not provided in the prompt, let us consider the
following weighted undirected graph with 5 Vertices (Nodes 1 to 5) and 7 Edges.

Graph Details (Edges and Weights):

 Edge (1, 2): Weight 2
 Edge (1, 3): Weight 3
 Edge (2, 3): Weight 1
 Edge (2, 4): Weight 1
 Edge (2, 5): Weight 4
 Edge (3, 5): Weight 5
 Edge (4, 5): Weight 1

(a) Prim’s Algorithm

Concept: Prim's algorithm is a greedy algorithm. It grows the MST from a starting vertex
(arbitrarily chosen as Node 1) by always adding the cheapest edge that connects a vertex in
the tree to a vertex outside the tree.

Step-by-Step Construction:

 Initialization:
o Visited Set: $\{1\}$
o MST Edges: $\{\}$
o Total Cost: $0$

Step 1:

 Current Tree: $\{1\}$


 Available Edges from Tree:
o $(1, 2)$ - Weight 2
o $(1, 3)$ - Weight 3
 Selection: Minimum weight is 2 (Edge $1-2$).
 Action: Add Node 2 to Visited.
 Result: Visited $\{1, 2\}$, MST Edges $\{(1, 2)\}$.

Step 2:

 Current Tree: $\{1, 2\}$


 Available Edges from Tree (connecting to unvisited nodes):
o $(1, 3)$ - Weight 3
o $(2, 3)$ - Weight 1
o $(2, 4)$ - Weight 1
o $(2, 5)$ - Weight 4
 Selection: Minimum weight is 1. We have a tie between $(2, 3)$ and $(2, 4)$. Let's
pick $(2, 3)$.
 Action: Add Node 3 to Visited.
 Result: Visited $\{1, 2, 3\}$, MST Edges $\{(1, 2), (2, 3)\}$.

Step 3:

 Current Tree: $\{1, 2, 3\}$


 Available Edges from Tree:
o $(1, 3)$ - Rejected (Both 1 & 3 are in tree)
o $(2, 4)$ - Weight 1
o $(2, 5)$ - Weight 4
o $(3, 5)$ - Weight 5
 Selection: Minimum weight is 1 (Edge $2-4$).
 Action: Add Node 4 to Visited.
 Result: Visited $\{1, 2, 3, 4\}$, MST Edges $\{(1, 2), (2, 3), (2, 4)\}$.

Step 4:

 Current Tree: $\{1, 2, 3, 4\}$


 Available Edges from Tree:
o $(2, 5)$ - Weight 4
o $(3, 5)$ - Weight 5
o $(4, 5)$ - Weight 1
 Selection: Minimum weight is 1 (Edge $4-5$).
 Action: Add Node 5 to Visited.
 Result: Visited $\{1, 2, 3, 4, 5\}$. All nodes visited. STOP.

Final MST (Prim's):

Edges: $\{(1, 2), (2, 3), (2, 4), (4, 5)\}$

Total Cost: $2 + 1 + 1 + 1 = \mathbf{5}$

(b) Kruskal’s Algorithm

Concept: Kruskal's algorithm is also a greedy algorithm but works differently. It treats the
graph as a forest of trees. It sorts all edges by weight and adds the smallest edge to the MST,
provided it does not form a cycle.

Step-by-Step Construction:

1. Table of Edges (Sorted):

| Edge | Weight |
| :--- | :--- |
| (2, 3) | 1 |
| (2, 4) | 1 |
| (4, 5) | 1 |
| (1, 2) | 2 |
| (1, 3) | 3 |
| (2, 5) | 4 |
| (3, 5) | 5 |

2. Selection Process:

 Iteration 1:
o Select Edge (2, 3) (Weight 1).
o Does it form a cycle? No.
o Action: Accept.
 Iteration 2:
o Select Edge (2, 4) (Weight 1).
o Does it form a cycle? No.
o Action: Accept.
 Iteration 3:
o Select Edge (4, 5) (Weight 1).
o Does it form a cycle? No.
o Action: Accept.
 Iteration 4:
o Select Edge (1, 2) (Weight 2).
o Does it form a cycle? No. (Connects Node 1 to the cluster $\{2,3,4,5\}$).
o Action: Accept.
 Iteration 5:
o Select Edge (1, 3) (Weight 3).
o Does it form a cycle? Yes. (Nodes 1 and 3 are already connected via 1-2-3).
o Action: Reject.
 Iteration 6:
o Select Edge (2, 5) (Weight 4).
o Does it form a cycle? Yes. (Nodes 2 and 5 are connected via 2-4-5).
o Action: Reject.
 Iteration 7:
o Select Edge (3, 5) (Weight 5).
o Does it form a cycle? Yes.
o Action: Reject.

Final MST (Kruskal's):

Edges: $\{(2, 3), (2, 4), (4, 5), (1, 2)\}$

Total Cost: $1 + 1 + 1 + 2 = \mathbf{5}$
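
The cycle check in Kruskal's algorithm is usually implemented with a Union-Find (Disjoint Set) structure. The following Python sketch (illustrative, not part of the original solution) runs the same graph.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with Union-Find (path compression).
    `edges` is a list of (weight, u, v) over vertices 1..n."""
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # process edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # same root => cycle, so skip
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Assumed graph: 5 vertices, 7 edges as listed above
edges = [(2, 1, 2), (3, 1, 3), (1, 2, 3), (1, 2, 4), (4, 2, 5), (5, 3, 5), (1, 4, 5)]
mst, total = kruskal_mst(5, edges)
print(mst, total)  # total cost 5
```
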

Conclusion

 Prim's Algorithm Total Cost: 5
 Kruskal's Algorithm Total Cost: 5

Result: Both algorithms yielded the same minimum total cost. The set of edges selected happens to be identical in this case; in graphs with duplicate edge weights the specific edges chosen may differ, but the total cost will always be the same.

Total Minimum Cost = 5
