
Thread

In computing, a thread is a lightweight unit of execution that can run independently within a program. Threads are a fundamental component of modern operating systems and programming languages, and they allow developers to write concurrent and parallel applications that perform multiple tasks simultaneously.

In an operating system, a thread is a lightweight unit of execution within a process. A process is an instance of a program that is being executed, and a thread is a subset of the process that can run concurrently with other threads within the same process.
Threads share resources with other threads in the
same process, such as memory, file handles, and
network connections, which makes them more efficient
than processes. Because threads are lighter weight
than processes, they can be created and destroyed
more quickly, and they can switch between tasks
more rapidly.
Threads in operating systems can be used to improve the
responsiveness and efficiency of multi-tasking applications
by allowing different parts of the program to execute
concurrently. They can also be used to perform
background tasks, such as I/O operations or network
communication, without blocking the main thread or other
threads.
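As an illustration of running a background task without blocking the main thread, here is a minimal sketch using Python's standard threading module; the background_download function and its simulated delay are hypothetical stand-ins for a real I/O operation:

```python
import threading
import time

def background_download(results):
    # Simulate a slow I/O operation (e.g., a network download)
    time.sleep(0.1)
    results.append("file contents")

results = []
worker = threading.Thread(target=background_download, args=(results,))
worker.start()   # the I/O task runs concurrently with the main thread
print("main thread stays responsive while the download runs")
worker.join()    # wait for the background task to finish
print(results)   # ['file contents']
```

While the worker thread sleeps in the simulated I/O call, the main thread continues executing; join() is only needed when the main thread must wait for the result.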
In modern operating systems, threads are managed
by the kernel, which provides services such as
scheduling, synchronization, and communication
between threads. Operating systems use different
thread models, such as the one-to-one model, where each
user-level thread is mapped to a kernel thread, or the
many-to-many model, where multiple user-level threads
can be mapped to a smaller number of kernel threads.
Components of Thread
Threads are a fundamental component of modern
operating systems and programming languages, and they
consist of several key components that enable them to
perform concurrent and parallel processing. The main
components of a thread include:
 Thread ID
 Program counter
 Stack
 Register set
 Thread priority
 Thread state
 Thread-safety
 Synchronization
These components work together to enable threads to
execute concurrently and perform complex tasks in a
parallel and efficient manner.
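A few of these components (thread ID, name, and state) can be observed directly through Python's threading module. This is only an illustrative sketch; the name "worker-1" is an arbitrary choice:

```python
import threading

t = threading.Thread(target=lambda: None, name="worker-1")
print(t.name)        # thread name chosen at creation: worker-1
print(t.is_alive())  # thread state: created but not yet started -> False
t.start()
t.join()
print(t.ident is not None)  # numeric thread ID, assigned by start() -> True
print(t.is_alive())         # thread has finished -> False
```

The program counter, stack, and register set are managed by the runtime and kernel and are not directly visible from this API.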

Need for Threads
We need threads for several reasons, including:
1. Improved performance: Threads can help improve
the performance of an application by allowing it to
execute multiple tasks concurrently, thereby reducing
the overall processing time.
2. Responsiveness: Threads can help improve the
responsiveness of an application by allowing it to
respond to user input while performing time-
consuming tasks in the background.
3. Resource sharing: Threads can share resources
such as memory, files, and network connections.
4. Modularity: Threads can help improve the
modularity of an application by allowing it to break
complex tasks into smaller, more manageable units of
work that can be executed concurrently.
5. Asynchronous processing: Threads can be used to
perform asynchronous processing, such as handling
input/output operations.
6. Parallel processing: Threads can enable an
application to perform parallel processing.

What is Multithreading?
Multithreading is a programming and operating system
concept where a single process is divided into
multiple threads, each capable of executing
independently. A thread is a lightweight subprocess that
shares the same memory space as other threads within
the same process but runs independently, allowing
parallel execution of tasks. Unlike separate processes,
threads can easily communicate and share data with each
other since they operate within the same address space.
For example, in a web browser, different tabs can load
content simultaneously using separate threads. In MS
Word, one thread might handle text input while another
checks spelling in the background. This concurrent
execution boosts efficiency and enhances user
responsiveness.
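The shared-address-space communication described above can be sketched in Python, where several threads update the same dictionary inside one process; the spell-checking scenario and all names here are illustrative:

```python
import threading

shared = {"words_checked": 0}   # lives in the process's shared address space
lock = threading.Lock()

def spellcheck_chunk(count):
    # Every thread sees and updates the very same dictionary object
    with lock:
        shared["words_checked"] += count

threads = [threading.Thread(target=spellcheck_chunk, args=(10,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["words_checked"])  # 40: all four threads wrote to shared state
```

No pipes, sockets, or other IPC mechanisms are needed; because the threads belong to one process, passing data is as simple as touching a shared variable (guarded by a lock).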

Why Multithreading?
1. Enhanced Performance: By utilizing multiple cores,
multithreading allows tasks to be executed in parallel,
speeding up complex computations or data
processing.
2. Responsive Applications: Background operations
(like downloads or file saving) don't freeze the UI,
ensuring a smooth user experience.
3. Efficient Resource Sharing: Threads within the
same process share memory and other system
resources, reducing overhead and improving
performance.
4. Better Utilization of CPU: Idle CPU cycles are
reduced as threads can run concurrently, keeping the
processor engaged.
5. Improved User Experience: Applications can
handle multiple operations—like scrolling, playing
audio, and processing input—simultaneously, making
software feel faster and more interactive.
Process vs Thread
 Definition: A process is an instance of a program being executed by the operating system; a thread is a lightweight process that is part of a larger process.
 Resource usage: Processes need more resources and memory; threads need fewer.
 Cost: Processes have higher creation and management overhead; threads have lower creation and management costs.
 Creation time: Creating a process takes more time; creating a thread takes less.
 Synchronisation: Processes require IPC mechanisms for synchronisation; for threads, synchronisation via shared memory is simpler.
 Isolation: Processes are isolated from one another; threads share memory.
 Scalability: Processes are less scalable due to higher resource utilisation; threads are more scalable and ideal for multi-core CPUs.
 Fault tolerance: One process failure doesn't affect other processes; one faulty thread can impact the entire process.

Types of threads
There are two types of threads in an operating system:
1. User-level threads
2. Kernel-level threads
User-level threads
User-level threads are supported above the kernel. They
are managed without kernel support by the run-time
system.
The kernel does not know anything about the user-level
threads.
Advantages of User-level Threads
 User-level threads are very fast and efficient; switching between them takes about as long as a procedure call.
 Thread management is done entirely in user space, without OS involvement.
 They are more flexible and can be customised to the specific needs of an application.
Disadvantages of User-level Threads
 The OS is unaware of user-level threads, so its scheduler cannot schedule them directly.
 The entire process blocks if one user-level thread performs a blocking operation.
 They cannot take full advantage of multi-core processors, because only one thread of the process can run on a core at any given time.
Kernel-level threads
Kernel-level threads are created and managed by the operating system's kernel. This provides greater control and visibility into thread execution, but at the cost of increased overhead and potential scalability issues.
Advantages of Kernel-level Threads
 The kernel is fully aware of kernel-level threads, so
the scheduler handles the process better.
 The kernel can still schedule another thread for
execution if one thread is blocked.
 Kernel-level threads provide better performance
because they are managed directly by the operating
system, which can schedule them more efficiently
than user-level threads.
Disadvantages of Kernel-level Threads
 They are slower to create and manage than user-level threads and therefore less efficient.
 Each kernel-level thread requires its own thread control block, which adds overhead.
 They can lead to resource contention and scheduling overhead due to their heavier use of system resources.
Advantages of Threading
1. Responsiveness: A multithreaded application
increases responsiveness to the user.
2. Resource Sharing: Resources like code and data are
shared between threads, thus allowing a
multithreaded application to have several threads of
activity within the same address space.
3. Increased concurrency: Threads may run in parallel on different processors, increasing concurrency on a multiprocessor machine.
4. Lesser cost: It costs less to create and context-
switch threads than processes.
5. Lesser context-switch time: Threads take lesser
context-switch time than processes.
Issues with Threading
 Race Conditions: Occur when multiple threads
access shared resources simultaneously, leading to
unpredictable behavior.
 Deadlocks: This happens when two or more threads
wait indefinitely for each other to release resources.
 Synchronization Problems: Arise when threads are
not properly coordinated, causing data inconsistency
or errors.
 Debugging Complexity: Multithreaded code is often
harder to debug and test due to concurrent execution
paths.
 Performance Overhead: Managing multiple threads
can lead to system overhead and degrade
performance.
 Resource Contention: Threads may compete for
system resources, which can slow down execution
and reduce efficiency.
 Increased Design Complexity: Requires careful
planning and robust testing to avoid concurrency-
related issues.
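The first three issues above can be illustrated with a small Python sketch: the read-modify-write on a shared counter is a classic race condition, and a lock serialises the critical section so no updates are lost. The counts and names here are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # serialise the read-modify-write critical section
            counter += 1  # without the lock, interleaved updates can be lost

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 every run; an unlocked version may print less
```

Locks also introduce the deadlock risk listed above: if two threads each hold one lock and wait for the other's, both wait forever, which is why lock acquisition order must be planned carefully.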

Frequently Asked Questions


Can threads run on any operating system?
Threads are a fundamental feature of modern operating
systems and are supported by most operating systems,
including Windows, macOS, Linux, and many others.
Why do we need threads in operating system?
Threads are a fundamental feature of modern operating
systems because they provide a way to achieve
concurrency, which allows multiple tasks to be executed
simultaneously on a single processor or across multiple
processors.
Is threads an essential part of operating system?
Threads are not necessarily an essential part of an
operating system, but they are a fundamental feature that
is widely supported by modern operating systems and
used by many applications.
What resources are used when thread is created?
Because a thread is a component of a process, it consumes far fewer resources than creating a new process: it shares the memory space of the process that created it, needing only its own stack, register set, and thread control block.
Which threads are recognized by the OS?
Kernel-level threads are recognized by the OS. These
threads are created and managed by the operating
system's kernel, which provides the necessary services
and resources for thread management.

A multicore system consists of two or more processor cores attached to a single chip to enhance performance, reduce power consumption, and allow more efficient simultaneous processing of multiple tasks. Multicore systems are the recent trend, and each core appears to the operating system as a separate processor. A multicore system can execute more than one thread in parallel, whereas a single-core system can execute only one thread at a time.

Implementing a multicore system is more beneficial than enhancing a single-core system by increasing the number of transistors on the chip, because increasing the number of transistors on a single core increases the complexity of the system.

Challenges of multicore systems:

Since a multicore system consists of more than one processor core, the need is to keep all of them busy to make the best use of the multiple computing cores. Scheduling algorithms must be designed to use the multiple computing cores and allow parallel computation. Another challenge is to modify existing programs, and design new ones, to be multithreaded so they take advantage of the multicore system.

In general, five areas present challenges in programming for multicore systems:
1. Dividing Activities:
The challenge is to examine the task to find areas that can be divided into separate, concurrent subtasks able to execute in parallel on individual cores, making full use of the multiple computing cores.

2. Balance:
While dividing the task into subtasks, equality must be ensured so that every subtask performs an almost equal amount of work. If one subtask has a lot of work while others have very little, multicore programming may not enhance performance compared to a single-core system.

3. Data splitting:
Just as the task is divided into smaller subtasks, the data accessed and manipulated by the task must also be divided to run on different cores, so that the data is easily accessible to each subtask.

4. Data dependency:
Since the various smaller subtasks run on different cores, one subtask may depend on data from another. The data must therefore be examined carefully so that execution of the whole task is synchronised.

5. Testing and Debugging:
When different smaller subtasks are executing in parallel, testing and debugging such concurrent tasks is more difficult than testing and debugging a single-threaded application.
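Several of these areas (dividing activities, balance, and data splitting) can be sketched together in Python: the input list is split into near-equal chunks, each chunk is summed by its own thread, and the independent partial results are combined at the end. The parallel_sum helper is a hypothetical illustration, not a prescribed API:

```python
import threading

def parallel_sum(data, workers=4):
    # Data splitting with balance: divide the input into near-equal chunks
    chunk = (len(data) + workers - 1) // workers
    partials = [0] * workers

    def subtask(i):
        # Each subtask works only on its own slice and its own result slot,
        # so there is no data dependency between subtasks
        partials[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=subtask, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)  # combine the independent partial results

print(parallel_sum(list(range(1, 101))))  # 5050
```

Because the only synchronisation point is the final join-and-combine step, this decomposition avoids the data-dependency and race-condition pitfalls discussed above.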

Multithreading Models

Multithreading is a technique where a process is divided into smaller execution units called threads that run concurrently.
 A thread is also called a lightweight process.
Concurrency or Parallelism within a process is
achieved by dividing a process into multiple threads.
 Multithreading improves system performance and
responsiveness by allowing multiple threads to share
CPU, memory and I/O resources of a single process.
 Example: In a browser, each tab can be a thread. In
MS Word, one thread formats text while another
processes inputs.

How are User Threads Mapped to Kernel Threads?

Multithreading models are of three types:
 Many-to-many model
 Many-to-one model
 One-to-one model

Many to Many Model

In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if one user thread blocks, other user threads can be scheduled onto other kernel threads; thus, the system does not block when a particular thread is blocked.
It is considered the best multithreading model.

Many-to-Many Multithreading Model
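As a loose user-space analogy to this multiplexing, Python's concurrent.futures thread pool runs many submitted tasks on a smaller, fixed set of worker threads. This only illustrates the many-onto-fewer idea; the real user-to-kernel thread mapping is performed by the threading library and the OS, not by application code:

```python
from concurrent.futures import ThreadPoolExecutor

# Eight tasks are multiplexed onto just two worker threads, much as many
# user threads map onto a smaller number of kernel threads in this model.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda x: x * x, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

If one task blocks (e.g., on I/O), the other worker keeps making progress on the remaining queued tasks, mirroring the advantage described above.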


Many to One Model

In this model, multiple user threads are mapped to a single kernel thread. When a user thread makes a blocking system call, the entire process blocks. Since there is only one kernel thread and only one user thread can access the kernel at a time, multiple threads cannot run on multiple processors at the same time.

Thread management is done at the user level, so it is more efficient.

Many-to-One Multithreading Model

One to One Model

In this model, there is a one-to-one relationship between kernel and user threads, so multiple threads can run on multiple processors. The drawback of this model is that creating a user thread requires creating a corresponding kernel thread.

Because each user thread is backed by a different kernel thread, if any user thread makes a blocking system call, the other user threads are not blocked.
One-to-One Multithreading Model

Which of the above Model is used in Practice?

The one-to-one model is usually preferred in practice because:
1. True Parallelism: Since each user thread is backed
by a kernel thread, the operating system can
schedule them on multiple CPUs/cores at the same
time. This means multiple threads from the same
process can actually run in parallel, unlike the many-
to-one model (where all user threads run on a single
kernel thread).
2. Blocking System Calls Don’t Stop Others: In the
many-to-one model, if one user thread makes a
blocking system call (like waiting for I/O), the whole
process is stuck. In the one-to-one model, only that
thread blocks; other threads of the same process
keep running.
3. Better Utilization of Multiprocessor Systems:
Modern systems are multicore. One-to-one allows full
usage of available cores, which improves
performance.
4. OS Support and Management: The operating
system manages kernel threads directly, so
scheduling, context switching, and resource allocation
are efficient and handled at the kernel level.

Is Multithreading Possible Without OS Support?

 Multithreading can be implemented without OS support, as in Java's early "green threads" model (modern JVMs instead map threads to OS threads).
 In that model, threads are managed by the Java Virtual Machine (JVM), not the OS.
 The application manages thread creation, scheduling, and context switching using a thread library.
 The OS is unaware of these threads and treats the process as single-threaded.

Applications

1. Transaction Processing
 Widely used in banking, online payments, and mobile recharges.
 Multiple transactions can be processed simultaneously without delays.

2. Web and Internet Applications
 Threads handle multiple client requests at the same time (e.g., web servers, chat servers).
 Improves responsiveness and efficiency of online platforms.

3. Banking & Financial Systems
 Online fund transfers, balance updates, and background verification tasks run in parallel.
 Prevents bottlenecks during high traffic (e.g., during salary credits or sales).

4. Telecom & Recharge Services
 Multiple recharge or service requests are processed concurrently.
 Ensures quick response and smooth functioning for millions of users.
