8/12/2019 Dynamic Scheduler for Multi-Core Processor_final Report _all 4 Names
A
Final Project Report
on
Dynamic Scheduler for Multi-Core Processor
Submitted in partial fulfilment of the
requirements for the Degree of
Bachelor of Engineering
in
Information Technology
by
Sachin A. Janani
Balaji B. Ankamwar
Vaijinath T. Jadhav
Abbas A. Baramatiwala
Under the guidance of
Prof. Dinesh A. Zende
Department of Information Technology
Vidya Pratishthan's College of Engineering
Baramati 413133, Dist- Pune (M.S.)
INDIA
April 2012
VPCOE, Baramati
Department of Information Technology
Certificate
This is to certify that the dissertation entitled
Dynamic Scheduler for Multi-Core Processor
submitted by
Sachin A. Janani
Balaji B. Ankamwar
Vaijinath T. Jadhav
Abbas A. Baramatiwala
is a record of bona fide work carried out by them, in partial
fulfillment of the requirement for the award of the Degree of Bachelor
of Engineering in Information Technology at Vidya Pratishthan's
College of Engineering, Baramati, under the University of Pune. This
work was done during the year 2011-12, under our guidance.
Prof. Dinesh A. Zende Prof. S. A. Takale Dr. S. B. Deosarkar
Assistant Professor Head of Dept. Principal
Examiner 1: Examiner 2:
Acknowledgements
Our time at the Department of Information Technology, Vidya Pratishthan's
College of Engineering, was highly eventful. Working with its highly devoted
professors remains one of the most memorable experiences of our lives. This
acknowledgement is a humble attempt to sincerely thank all those who were
directly or indirectly involved in our project and were of immense help to us.
We would personally like to thank Prof. S. A. Takale, Head of the Information
Technology department, who reviewed this project report with undying interest.
We take this opportunity to thank our respected project guide, Prof. Dinesh A.
Zende, for his generous assistance. Lastly, we would like to thank our Principal,
Dr. S. B. Deosarkar, who created a healthy environment for all of us to learn in
the best possible way.
Janani Sachin A.
Ankamwar Balaji B.
Jadhav Vaijnath T.
Baramatiwala Abbas A.
Abstract
Many dynamic scheduling algorithms have been proposed in the past. With the
advent of multi-core processors, there is a need to schedule multiple tasks on
multiple cores, and the scheduling algorithm needs to utilize all the available
cores efficiently. The multicore processors may be SMPs or AMPs with a shared
memory architecture. In this report, we propose a dynamic scheduling algorithm
in which the scheduler resides on all cores of a multi-core processor and accesses
a shared Task Data Structure (TDS) to pick up ready-to-execute tasks. This
method is unique in the sense that the processor has the onus of picking up
tasks whenever it is idle. We discuss the proposed scheduling algorithm using a
set of tasks as an example.
High performance on multicore processors also requires that schedulers be
reinvented. Traditional schedulers focus on keeping execution units busy by
assigning each core a thread to run. Schedulers ought to focus, however, on high
utilization of on-chip memory, rather than of execution cores, to reduce the
impact of expensive DRAM and remote cache accesses. A challenge in achieving
good use of on-chip memory is that the memory is split up among the cores in
the form of many small caches. One such scheduling approach assigns each object
and its operations to a specific core, moving a thread among the cores as it
uses different objects.
Contents
Acknowledgements
Abstract
Keywords
Notation and Abbreviations
1 Introduction
   1.1 Introduction
   1.2 Motivation
   1.3 Related Theory
2 Literature Survey
   2.1 Need of the Topic
3 Proposed Work
   3.1 Problem Definition
   3.2 Project Scope
   3.3 Project Objectives
   3.4 Project Constraints
4 Research Methodology
5 Project Design
   5.1 Hardware Requirements
   5.2 Software Requirements
   5.3 Risk Analysis
   5.4 Data Flow Diagrams
   5.5 Project Schedules
   5.6 UML Documentations
6 System Implementations
   6.1 Important Functions
   6.2 Important Algorithms
   6.3 Important Data Structure
7 System Testing
8 Experimental Results
9 Conclusion
10 Future Scope
A Appendix
References
List of Tables
5.1 Schedule
7.1 Test Case
8.1 Dependency Table
List of Figures
5.1 Editing the GRUB 2 Menu During Boot
5.2 DFD
5.3 Gantt Chart
5.4 Use Case
5.5 Flow Chart
5.6 Sequence Diagram
6.1 Dynamic Scheduler
6.2 Task Data Structure (TDS)
8.1 Simulation Result
A.1 Menuconfig
A.2 menu.lst file
Keywords
List of keywords:
Dynamic scheduler; multi-core systems; load balancing; workload distribution;
affinity scheduling; thread migration; thread scheduling
Notation and Abbreviations
SMP - Symmetric Multiprocessor
AMP - Asymmetric Multiprocessor
TDS - Task Data Structure
SCU - Software Cache Unification
NUMA - Non-Uniform Memory Access
UMA - Uniform Memory Access
MPSoC - Multi-Processor System on Chip
DSM - Distributed Shared Memory
To utilize multi-core processors more efficiently for embedded applications where
only one single application executes at any time, the application should be divided
into subtasks. This demands a scheduling algorithm that can be efficient enough to exploit
the multicore architecture to achieve an optimal schedule in terms of time of execution
and processor utilization.
1.2 Motivation
With the emergence of multicore chips, future distributed shared memory (DSM)
systems will have less powerful processor cores but will have tens of thousands of cores.
Performance asymmetry in multicore platforms is another trend, driven by budget issues
such as power consumption and area limitation as well as by the varying degrees of
parallelism in different applications [5]. We call such a system a heterogeneous manycore
DSM system. Processor cores belonging to the same level (e.g., the same chip or board)
frequently share memory resources. For instance, cores on the same chip may share an
L2 or L3 cache.
The shared-memory programming model is capable of attaining the benefits of
large-scale parallel computing without surrendering much programmability [8]. Using the
shared-memory model, a program can be written as if it were running on a large
processor-count SMP machine. From the perspective of application developers, all
processors provide identical performance and the memory access time from each processor
is also uniform. This model has been widely accepted and used for a long time. If we now
compare the real architecture with the developers' vision of it, there is a big gap
between them. A number of long-standing assumptions are broken:
- Instead of a uniform memory access time, there are various memory latencies. The
immediate result is that placing threads on arbitrary processors may lead to
sub-optimal performance when data are accessed in common by threads.
- Heterogeneous cores provide different compute powers. Developers should still be
able to write portable programs regardless of the machine.
- When the number of user-level threads is greater than the number of kernel threads,
affinity-based thread scheduling must be taken into account to maximize program
locality.
Dynamic Scheduler for Multi-Core Processor 2 VPCOE, Baramati
- If a number of cores share a certain level of cache, problems may arise due to
resource contention.
We hope to find a method to reschedule threads to close the above gap and improve
the performance of multithreaded programs. The scheduling method should be automatic
and applicable to a variety of general-purpose programs.
Another issue is that multicore chips consist of relatively simple processor cores and
will be underutilized if user programs cannot provide sufficient thread-level parallelism.
It is the developer's responsibility to write high-performance parallel software that fully
utilizes the processor cores. To achieve high performance, we believe that new parallel
multicore software should have the following two characteristics:
1. Fine-grain threads. We need a high degree of parallelism to keep every processor
core busy. Another reason is that a core often has only a small cache or scratch
buffer to work on, which requires that developers decompose a task into smaller tasks.
2. Asynchronous program execution. When there are many processor cores, the presence
of a synchronization point can seriously affect program performance, and eliminating
unnecessary synchronization points increases the degree of parallelism accordingly.
Therefore, we want to adapt the current scheduling approach to design a new
dynamic scheduler for multicore architectures. The dynamic scheduling approach places
fine-grain computational tasks in a directed acyclic graph and schedules them
dynamically depending on data dependence, program locality, and the critical path.
1.3 Related Theory
Since IBM released the Power4 (dual cores) in 2001 and Sun Microsystems released
the UltraSPARC T1 (eight cores) in 2005, great numbers of multicore chips have been
implemented by various vendors [6]. Traditional microarchitectures typically rely on
increasing the complexity of the logic, wires, and design to find more Instruction-Level
Parallelism (ILP).
prediction according to the history record of task scheduling. It then rearranges a long
task into smaller subtasks to form another task state graph and then schedules them in
parallel. Blagojevic et al. examine user-level schedulers that dynamically right-size the
dimensions and degrees of parallelism on the Cell Broadband Engine. Blagojevic et al.
also mention a new method that uses sampling of dominant execution phases to converge
to the optimal scheduling algorithm.
2.1 Need of the topic
The main objective of the project is to dynamically load-balance arriving tasks on
the multiple cores of the system and to investigate how to effectively schedule threads
to improve program performance on multicore architectures. This work formulates the
affinity-based thread scheduling problem on shared-memory multicore systems and
proposes a static feedback-directed approach to computing optimized thread schedules
that improve effectiveness at every level of a complex memory hierarchy while keeping
the load balanced. The dissertation also studies a dynamic data-availability-driven
scheduling approach for fine-grain parallel programs and demonstrates the scalability
and practicality of the approach on both shared-memory and distributed-memory
multicore systems.
Chapter 3
Proposed Work
3.1 Problem Definition
We are going to develop a Dynamic Scheduler for a Multicore Processor System.
We propose a dynamic scheduling algorithm in which the scheduler resides on all cores
of a multi-core processor and accesses a shared Task Data Structure (TDS) to pick up
ready-to-execute tasks.
3.2 Project Scope
A number of applications have been developed to date, but they run efficiently
only on single-core processor systems. These applications do not run efficiently on
multicore systems: they do not utilize all cores of the processor equally. In general,
an application runs on the first core, which becomes burdened with the assigned tasks,
while the other cores remain underutilized.
3.3 Project Objectives
Multi-core processors have two or more processing elements, or cores, on a single
chip. These cores may be of similar architecture (Symmetric Multicore Processors,
SMPs) or of different architectures (Asymmetric Multicore Processors, AMPs). All
the cores necessarily use a shared memory architecture. Multicore processors have existed
previously in the form of MPSoC (Multi-Processor System on Chip) but they were limited
to a segment of applications such as networking. The easy availability of multicore
processors has forced software programmers to change the way they think and write their
applications. Unfortunately, the applications written so far are sequential in nature. We can extract the
inherent parallelism in such applications to exploit the available multi core architecture.
To do so, conversion of sequential code to parallel code or writing parallel applications
from scratch may not alone solve the problem optimally. There is a definite need for
scheduling algorithms suitable for shared memory architecture to increase the efficiency
of multi-core processors in the presence of multiple tasks within an application. Most of the
proposed scheduling algorithms for multi-core processors concentrate on scheduling tasks
that are independent of each other. This means that execution of one task does not affect
or is not dependent on the result of other tasks and they may execute concurrently.
To utilize multi-core processors more efficiently for embedded applications where
only one single application executes at any time, the application should be divided into
subtasks. This demands a scheduling algorithm that can be efficient enough to exploit
the multicore architecture to achieve an optimal schedule in terms of time of execution
and processor utilization.
3.4 Project Constraints
The Dynamic Multicore scheduler exceeds performance expectations on some workloads
on multicore systems, but it still shows weakness on other workloads; in particular,
there is some unresponsiveness of the Dynamic Multicore scheduler in the 3D-game area.
In the currently implemented dynamic scheduling policy, we cannot handle deadlocks
that occur while scheduling tasks and balancing them across multiple cores.
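One standard way to remove that deadlock risk, used for instance by the Linux kernel's double_rq_lock(), is to always acquire the two runqueue locks in a single global order. The sketch below, with hypothetical runqueue and balance() names, shows the idea in user-space C:

```c
#include <pthread.h>

/* Two "cores" migrating tasks between each other's runqueues can
 * deadlock if each locks its own queue first. The standard fix,
 * also used by the Linux kernel's double_rq_lock(), is to acquire
 * both locks in one global order (here: by address). */

struct runqueue {
    pthread_mutex_t lock;
    int nr_running;
};

static void double_rq_lock(struct runqueue *a, struct runqueue *b)
{
    if (a == b) {
        pthread_mutex_lock(&a->lock);
    } else if (a < b) {                /* fixed order: lower address first */
        pthread_mutex_lock(&a->lock);
        pthread_mutex_lock(&b->lock);
    } else {
        pthread_mutex_lock(&b->lock);
        pthread_mutex_lock(&a->lock);
    }
}

static void double_rq_unlock(struct runqueue *a, struct runqueue *b)
{
    pthread_mutex_unlock(&a->lock);
    if (a != b)
        pthread_mutex_unlock(&b->lock);
}

/* Move one task's worth of load from the busier queue to the idler
 * one; with ordered locking, two cores balancing against each other
 * can never deadlock. */
void balance(struct runqueue *self, struct runqueue *busiest)
{
    double_rq_lock(self, busiest);
    if (busiest->nr_running > self->nr_running + 1) {
        busiest->nr_running--;
        self->nr_running++;
    }
    double_rq_unlock(self, busiest);
}
```

Whatever pair of queues two cores balance between, both always lock the lower-addressed queue first, so a cycle of "each holds one lock and waits for the other" cannot form. (Comparing pointers to distinct objects is kernel-style pragmatism here.)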
Chapter 4
Research Methodology
With the emergence of multicore chips, future distributed shared memory (DSM)
systems will have less powerful processor cores but will have tens of thousands of cores.
Performance asymmetry in multicore platforms is another trend, driven by budget issues
such as power consumption and area limitation as well as by the varying degrees of
parallelism in different applications [Balakrishnan et al., 2005, Kumar et al., 2004,
Kumar et al., 2006]. We call such a system a heterogeneous manycore DSM system.
Processor cores belonging to the same level (e.g., the same chip or board) frequently
share memory resources. For instance, cores on the same chip may share an L2 or L3
cache. The shared-memory programming model is capable of attaining the benefits of
large-scale parallel computing without surrendering much programmability [Lu et al.,
1995]. Using the shared-memory model, a program can be written as if it were running
on a large processor-count SMP machine. From the perspective of application developers,
all processors provide identical performance and the memory access time from each
processor is also uniform. This model has been widely accepted and used for a long
time. If we now compare the real architecture with the developers' vision of it, there
is a big gap between them. A number of long-standing assumptions are broken.
We hope to find a method to reschedule threads to close the above gap and improve
the performance of multithreaded programs. The scheduling method should be automatic
and applicable to a variety of general-purpose programs. Another issue is that multicore
chips consist of relatively simple processor cores and will be underutilized if user
programs cannot provide sufficient thread-level parallelism. It is the developer's
responsibility to write high-performance parallel software that fully utilizes the
processor cores. To achieve high performance, we believe that new parallel multicore
software should have the following two characteristics:
1. Fine-grain threads. We need a high degree of parallelism to keep every processor
core busy. Another reason is that a core often has only a small cache or scratch
buffer to work on, which requires that developers decompose a task into smaller tasks.
2. Asynchronous program execution. When there are many processor cores, the presence
of a synchronization point can seriously affect program performance, and eliminating
unnecessary synchronization points increases the degree of parallelism accordingly.
Therefore, we want to adapt the current scheduling approach to design a new
dynamic scheduler for multicore architectures. The dynamic scheduling approach places
fine-grain computational tasks in a directed acyclic graph and schedules them
dynamically depending on data dependence, program locality, and the critical path.
The most significant change in the 2.6 Linux kernel that improved scalability on
multiprocessor systems was in the kernel process scheduler. The design of the Linux 2.6
scheduler is based on per-CPU runqueues and priority arrays, which allow the scheduler
to perform its tasks in O(1) time. This mechanism solved many scalability issues, but
the scheduler still did not perform as expected on Hyper-Threaded systems and on
higher-end NUMA systems. In the case of Hyper-Threading, more than one logical CPU
shares the processor resources, cache, and memory hierarchy. In the case of NUMA,
different nodes have different access latencies to memory. These non-uniform
relationships between the CPUs in the system pose a significant challenge to the
scheduler. The scheduler must be aware of these differences, and the load distribution
needs to be done accordingly.
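The O(1) pick itself can be illustrated with a small sketch: a bitmap records which priority levels currently hold runnable tasks, and choosing the next task reduces to a find-first-set-bit operation. The 32-level array and the function names are our simplification; the real 2.6 scheduler uses 140 priority levels and sched_find_first_bit().

```c
/* Simplified O(1) task pick: a bitmap marks which priority levels
 * have runnable tasks, so picking the next task is "find first set
 * bit" - constant time regardless of how many tasks are queued. */

#define NPRIO 32

static unsigned int prio_bitmap;     /* bit p set => level p non-empty */
static int nr_at_prio[NPRIO];        /* tasks queued at each level */

void enqueue(int prio)
{
    nr_at_prio[prio]++;
    prio_bitmap |= 1u << prio;
}

/* Returns the highest-priority (lowest-numbered) non-empty level
 * and dequeues one task from it; -1 means the queue is idle. */
int pick_next_prio(void)
{
    if (!prio_bitmap)
        return -1;
    int prio = __builtin_ctz(prio_bitmap);  /* GCC/Clang: first set bit */
    if (--nr_at_prio[prio] == 0)
        prio_bitmap &= ~(1u << prio);
    return prio;
}
```

The cost of pick_next_prio() does not depend on how many tasks are runnable, only on the (fixed) width of the bitmap, which is what makes the scheduler O(1).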
To address this, the 2.6 Linux kernel scheduler introduced a concept called scheduling
domains [SD]. The 2.6 kernel uses hierarchical scheduler domains constructed
dynamically depending on the CPU topology of the system. Each scheduler domain
contains a list of scheduler groups having a common property. The load balancer runs
at each domain
level, and scheduling decisions happen between the scheduling groups in that domain.
On a high-end NUMA system with processors capable of Hyper-Threading, there will be
three scheduling domains, one each for HT, SMP, and NUMA.
In the presence of Hyper-Threading, when the system has fewer tasks than logical
CPUs, the scheduler must distribute the load uniformly between the physical packages.
This distribution avoids scenarios in which one physical package has more than one
logical CPU busy while another physical package is completely idle. Uniform load
distribution between physical packages leads to lower resource contention and higher
throughput. The presence of a Hyper-Threading scheduling domain helps the scheduler
achieve this equal load distribution between the physical packages.
Similarly, the NUMA scheduling domain helps avoid unnecessary task migration
from one node to another. This ensures that tasks stay most of the time on their home
node (the node where the task has allocated most of its memory).
Chapter 5
Project Design
5.1 Hardware Requirements
1. Processor: Multi-Core Processors
2. 256MB RAM.
5.2 Software Requirements
Operating System:
Ubuntu xx.xx.xx (any Linux OS)
Application Software:
1. HackBench
2. GCC Compiler
3. GTK
4. GEdit
5. Latest Kernel
5.3 Risk Analysis
While developing and installing a kernel with the dynamic scheduler on the current
Linux operating system, a number of problems occur, but they can be solved.
How To Enable the Root User (Super User) in Ubuntu
By default, the root account password is locked in Ubuntu. While compiling the new
kernel, the default Linux directory containing all the .o files is created in the /usr/src
directory, but ordinary user accounts have no permissions on it. So, when you do su -,
you will get an "Authentication failure" error message as shown below.
$ su
Password:
su: Authentication failure
First, unlock the root user and set a password for it as shown below.
$ sudo passwd root
[sudo] password for project:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
How do I update Ubuntu Linux software?
On a newly installed Linux OS there is no guarantee that all necessary packages are
present. Building a new kernel requires some special packages such as ncurses, gtk, qt,
gcc, make, etc., so these packages need to be added before starting the actual task. This
can be done by updating and upgrading the system, either with GUI tools or with
traditional command-line tools.
Using the apt-get command-line tool:
apt-get update :
update is used to resynchronize the package index files from their sources via the
Internet.
apt-get upgrade :
upgrade is used to install the newest versions of all packages currently installed on
the system.
apt-get install package-name :
install is followed by one or more packages desired for installation.
If a package is already installed, apt-get will try to update it to the latest version.
$ sudo apt-get update && sudo apt-get upgrade
Note that update and upgrade are apt-get sub-commands, not package names; passing
them to install only produces an error:
$ sudo apt-get install update
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package update
If all the packages update successfully, the kernel compilation will go through easily.
Editing the GRUB 2 Menu During Boot
After completing the task we need to reboot the system to see the new kernel, but
sometimes problems occur while loading the system. This is caused by problems in the
menu.lst file, and it can be solved by editing this file at boot time using the following
steps:
- If the menu is displayed, the automatic countdown may be stopped by pressing any
key other than ENTER.
- If the menu is not normally displayed during boot, hold down the SHIFT key as
the computer attempts to boot to display the GRUB 2 menu.
- In certain circumstances, if holding the SHIFT key does not display the menu,
pressing the ESC key repeatedly may display it.
The user can edit entries in the GRUB 2 menu using the following instructions:
- With the menu displayed, press any key (except ENTER) to halt the countdown
timer, and select the desired entry with the up/down arrow keys.
- Press the e key to reveal the selection's settings.
Figure 5.1: Editing the GRUB 2 Menu During Boot
- Use the keyboard to position the cursor. In this example, the cursor has been moved
so the user can change or delete the numeral 9.
- Make single or numerous changes to any line. Do not use ENTER to move between
lines.
- Tab completion is available, which is especially useful when entering kernel and
initrd entries.
When complete, determine the next step:
- CTRL-X: boot with the changed settings (highlighted for emphasis).
- C: go to the command line to perform diagnostics, load modules, change settings,
etc.
- ESC: discard all changes and return to the main menu.
The choices are listed at the bottom of the screen as a reminder. Edits made to the
menu in this manner are non-persistent: they remain in effect only for the current boot
and must be re-entered on the next boot.
Once successfully booted, the changes can be made permanent by editing the
appropriate file, saving it, and running update-grub as root.
5.4 Data Flow Diagrams
Figure 5.2: DFD
5.5 Project Schedules
Table 5.1: Schedule
#      Task                                         Days  Start       Finish      Assignments
T1     Beginning Phase                              12    06-10-2011  21-10-2011  Sachin Janani, Abbas Baramatiwala
T1.1   Track out why the project is needed          1     06-10-2011  06-10-2011
T1.2   Establish Project Scope                      1     07-10-2011  07-10-2011
T1.3   Establish Project Scope                      1     10-10-2011  10-10-2011
T1.4   Create Test Plan                             1     11-10-2011  11-10-2011
T1.5   Create Manufacturing Plan                    1     12-10-2011  12-10-2011
T1.6   Establish Engineering Requirements           1     13-10-2011  13-10-2011
T1.7   Establish Communications                     1     14-10-2011  14-10-2011
T1.8   Establish Project Goals                      1     17-10-2011  17-10-2011
T1.9   Staff Project                                1     18-10-2011  18-10-2011
T1.10  Establish Training Requirements              1     19-10-2011  19-10-2011
T1.11  Establish Engineering Requirements           1     20-10-2011  20-10-2011
T1.12  Establish Communications                     1     21-10-2011  21-10-2011
T2     Analysis Phase                               11    31-10-2011  14-11-2011  Sachin Janani, Abbas Baramatiwala
T2.1   Analyse constraints in project development   5     31-10-2011  04-11-2011
T2.1   Develop Project Specifications               6     01-11-2011  08-11-2011
T2.1   Develop Initial Documentation                1     02-11-2011  02-11-2011
T2.1   Conduct User Training                        4     03-11-2011  08-11-2011
T2.1   Create Manufacturing Plan                    5     04-11-2011  10-11-2011
T2.1   Create Marketing Plan                        6     07-11-2011  14-11-2011
T3     Design Phase                                 14    08-11-2011  25-11-2011  Abbas Baramatiwala, Sachin Janani
T3.1   Actual designing of the project              14    08-11-2011  25-11-2011
T3.2   Develop Prototype                            0     09-11-2011  09-11-2011
T4     Estimating Phase                             12    10-11-2011  25-11-2011  Vaijnath Jadhav, Balaji Ankamwar
T4.1   Estimate effort, time, cost, etc.            12    10-11-2011  25-11-2011
T4.2   Estimate Costs, Savings and/or Revenues      0     11-11-2011  11-11-2011
T5     Coding Phase                                 50    14-11-2011  10-01-2012  Sachin Janani, Abbas Baramatiwala, Vaijnath Jadhav, Balaji Ankamwar
T5.1   Actual development of the project            50    14-11-2011  10-01-2012
T5.2   Identify the independent threads in the process  2  15-11-2011  16-11-2011
T5.3   Complete Open Items                          1     16-11-2011  16-11-2011
T5.4   Run Performance Tests                        1     17-11-2011  17-11-2011
T5.5   Develop Prototype                            1     18-11-2011  18-11-2011
T6     Debugging Phase                              46    21-11-2011  23-01-2012  Sachin Janani, Abbas Baramatiwala, Vaijnath Jadhav, Balaji Ankamwar
T6.1   Remove bugs from the project                 1     21-11-2011  21-11-2011
T6.2   Finalize Testing                             1     23-11-2011  23-11-2011
T6.3   Correct Problems                             43    24-11-2011  23-01-2012
T6.4   Conduct Alpha Testing                        1     25-11-2011  25-11-2011
T7     Maintenance Phase                            4     20-02-2012  23-02-2012  Sachin Janani, Abbas Baramatiwala, Vaijnath Jadhav, Balaji Ankamwar
T7.1   Maintenance of the software after alpha and beta testing  1  20-02-2012  20-02-2012
T7.2   Evaluate Systems                             2     20-02-2012  21-02-2012
T7.3   Conduct Beta Testing                         3     21-02-2012  23-02-2012
Figure 5.3: Gantt Chart
5.6 UML Documentations
Figure 5.4: Use Case
Figure 5.5: Flow Chart
Figure 5.6: Sequence Diagram
Chapter 6
System Implementations
6.1 Important Functions
Since the scheduler implemented here targets multi-core processors but must remain compatible with single-core (uniprocessor) systems, the scheduler code is wrapped in a conditional compilation block, i.e.
#ifdef CONFIG_SMP
-----
------
#endif
The following functions are used for handling various challenges in scheduling:
1. void wait_task_inactive(task_t * p)
{
unsigned long flags;
runqueue_t *rq;
repeat:
rq = task_rq(p);
while (unlikely(rq->curr == p))
{
cpu_relax();
barrier();
}
rq = lock_task_rq(p, &flags);
if (unlikely(rq->curr == p))
{
unlock_task_rq(rq, &flags);
goto repeat;
}
unlock_task_rq(rq, &flags);
}
This function is generally used in SMP or multi-core scheduling. It waits for a process to be unscheduled, and is used by the exit() and ptrace() code.
2. static int try_to_wake_up(task_t* p, int synchronous)
{
unsigned long flags;
int success = 0;
runqueue_t *rq;
rq = lock_task_rq(p, &flags);
p->state = TASK_RUNNING;
if (!p->array)
{
activate_task(p, rq);
if ((rq->curr == rq->idle) || (p->prio < rq->curr->prio))
resched_task(rq->curr);
success = 1;
}
unlock_task_rq(rq, &flags);
return success;
}
This function wakes up a process. It works as follows: the process is put on the runqueue if it is not already there. The current process is always on the runqueue (except when an actual reschedule is in progress), and as such you are allowed to do the simpler current->state = TASK_RUNNING to mark yourself runnable without the overhead of this function.
3. int wake_up_process(task_t * p)
{
return try_to_wake_up(p,0);
}
This function simply calls the try_to_wake_up() function described above.
4. void sched_task_migrated(task_t *new_task)
{
wait_task_inactive(new_task);
new_task->cpu = smp_processor_id();
wake_up_process(new_task);
}
This function is generally used by the SMP message-passing mechanism whenever a new task arrives at the target CPU. The new task is moved into the local runqueue, so this function is used for task migration. It must be called with interrupts disabled. It works as follows:

a) The new task first waits for the old task to be unscheduled, via the function wait_task_inactive() explained above.

b) A CPU is assigned to the new task using the statement new_task->cpu = smp_processor_id().

c) After assigning the CPU, the new process is woken up using the function wake_up_process(new_task).
5. void kick_if_running(task_t * p)
{
if (p == task_rq(p)->curr)
resched_task(p);
}
This function is used to signal a CPU when the task to be signalled is currently running on another CPU. It kicks the remote CPU if the task is currently running there; the signal code uses it to signal tasks that are in user mode as quickly as possible. (Note that this is done locklessly: if the task does anything while the message is in flight, it will notice the sigpending condition anyway.)
6. static inline unsigned int double_lock_balance(runqueue_t *this_rq,
runqueue_t *busiest, int this_cpu, int idle, unsigned int nr_running)
{
if (unlikely(!spin_trylock(&busiest->lock)))
{
if (busiest < this_rq){
spin_unlock(&this_rq->lock);
spin_lock(&busiest->lock);
spin_lock(&this_rq->lock);
/* Need to recalculate nr_running*/
if (idle || (this_rq->nr_running > this_rq->prev_nr_running[this_cpu]))
nr_running = this_rq->nr_running;
else
nr_running = this_rq->prev_nr_running[this_cpu];
}
else
spin_lock(&busiest->lock);
}
return nr_running;
}
This function locks the busiest runqueue as well; this_rq is already locked. nr_running is recalculated if the runqueue lock had to be dropped.
7. static void load_balance(runqueue_t *this_rq, int idle)
{
int imbalance, nr_running, load, max_load, idx, i, this_cpu = smp_processor_id();
task_t *next = this_rq->idle, *tmp;
runqueue_t *busiest,*rq_src;
prio_array_t *array;
list_t *head, *curr;
/*
* We search all runqueues to find the most busy one.
* We do this lockless to reduce cache-bouncing overhead,
* we re-check the best source CPU later on again, with
* the lock held.
*
* We fend off statistical fluctuations in runqueue lengths by
* saving the runqueue length during the previous load-balancing
* operation and using the smaller one of the current and saved lengths.
* If a runqueue is long enough for a longer amount of time then
* we recognize it and pull tasks from it.
*
* The current runqueue length is a statistical maximum variable,
* for that one we take the longer one - to avoid fluctuations in
* the other direction. So for a load-balance to happen it needs
* stable long runqueue on the target CPU and stable short runqueue
* on the local runqueue.
*
* We make an exception if this CPU is about to become idle - in
* that case we are less picky about moving a task across CPUs and
* take what can be taken.
*/
if (idle || (this_rq->nr_running > this_rq->prev_nr_running[this_cpu]))
nr_running = this_rq->nr_running;
else
nr_running = this_rq->prev_nr_running[this_cpu];
busiest = NULL;
max_load = 1;
for (i = 0; i < smp_num_cpus; i++)
{
rq_src = cpu_rq(cpu_logical_map(i));
if (idle || (rq_src->nr_running < this_rq->prev_nr_running[i]))
load = rq_src->nr_running;
else
load = this_rq->prev_nr_running[i];
this_rq->prev_nr_running[i]= rq_src->nr_running;
if ((load > max_load) && (rq_src != this_rq))
{
busiest = rq_src;
max_load = load;
}
}
if (likely(!busiest))
return;
imbalance = (max_load - nr_running) / 2;
/*
* It needs at least ~25% imbalance to trigger
* balancing.
*/
if (!idle && (imbalance < (max_load + 3)/4))
return;
nr_running = double_lock_balance(this_rq, busiest, this_cpu, idle, nr_running);
/*
* Make sure nothing changed since we checked the
* runqueue length.
*/
if (busiest->nr_running <= nr_running + 1)
goto out_unlock;
/*
* We first consider expired tasks. Those will likely not be
* executed in the near future, and they are most likely to
* be cache-cold, thus switching CPUs has the least effect
* on them.
*/
if (busiest->expired->nr_active)
array = busiest->expired;
else
array = busiest->active;
new_array:
/*
* Load-balancing does not affect RT tasks, so we start the
* searching at priority 128.
*/
idx = MAX_RT_PRIO;
skip_bitmap:
idx = find_next_bit(array->bitmap, MAX_PRIO, idx);
if (idx == MAX_PRIO)
{
if (array == busiest->expired)
{
array = busiest->active;
goto new_array;
}
goto out_unlock;
}
head = array->queue + idx;
curr = head->prev;
skip_queue:
tmp = list_entry(curr,task_t, run_list);
/*
* We do not migrate tasks that are:
* 1) running (obviously),or
* 2) cannot be migrated to this CPU due to cpus_allowed, or
* 3) are cache-hot on their current CPU.
array = busiest->active;
goto new_array;
}
}
out_unlock: spin_unlock(&busiest->lock);
}
This function is used for load balancing when a single processor is overloaded. Tasks are pulled out of the busiest runqueue and put into the short runqueue. If there are tasks that are already ready locally, it is preferable to take one of those instead of migrating a task from the busiest runqueue.
8. Main Schedule function
void scheduling_functions_start_here(void) { }

This function is empty; it helps the developer mark where the scheduler code starts.

asmlinkage void schedule(void)
{
task_t *prev = current,*next;
runqueue_t *rq = this_rq();
prio_array_t *array;
list_t *queue;
int idx;
if (unlikely(in_interrupt()))
BUG();
release_kernel_lock(prev,smp_processor_id());
spin_lock_irq(&rq->lock);
goto switch_tasks;
}
array = rq->active;
if (unlikely(!array->nr_active))
{
/*
* Switch the active and expired arrays.
*/
rq->active = rq->expired;
rq->expired = array;
array = rq->active;
rq->expired_timestamp = 0;
}
idx = sched_find_first_bit(array->bitmap);
queue = array->queue+ idx;
next = list_entry(queue->next, task_t, run_list);
switch_tasks: prefetch(next);
prev->work.need_resched = 0;
if (likely(prev != next))
{
rq->nr_switches++;
rq->curr = next;
context_switch(prev, next);
/*
* The runqueue pointer might be from another CPU
* if the new task was last running on a different
* CPU - thus re-load it.
*/
barrier();
rq = this_rq();
}
spin_unlock_irq(&rq->lock);
reacquire_kernel_lock(current);
return;
}
6.2 Important Algorithms
1. Algorithm for process execution

1. Start
2. Repeat steps 3-6
3. If a new process arrives, calculate its process dependencies
4. Recalculate process priorities depending on the number of dependencies
5. Take the process for execution
6. Execute the process
7. Stop

2. Algorithm for process execution (per CPU tick)

1. Start
2. For each CPU tick:
   a) Resolve the dependencies of processes
   b) Mark the resolved processes as ready
   c) Recalculate the process priorities
3. If the burst time of a process > 0, go to step 2
4. Stop
6.3 Important Data Structure
1. struct runqueue
{
spinlock_t lock;
unsigned long nr_running, nr_switches, expired_timestamp;
task_t *curr, *idle;
prio_array_t *active, *expired, arrays[2];
int prev_nr_running[NR_CPUS];
} cacheline_aligned;
This structure is used to create the runqueue for each CPU or core in the system. Some places require locking multiple runqueues; lock-acquire operations must then be ordered by ascending runqueue address (&runqueue) to avoid deadlock.
2. The scheduler will reside in the shared memory of the multi-core system. This ensures that all cores share the scheduler code. The same scheduler code will be executing on different cores, and we maintain a shared task data structure (TDS) that contains task information. The TDS stores information such as status, list of dependent tasks, data and stack pointers, etc. The detailed description of the TDS is shown in Figure 6.2. The scheduler program executing on different cores (the scheduler instances) shares this TDS. Access to the TDS is exclusive to each scheduler instance; exclusivity is achieved through a locking mechanism such as locks or semaphores.

The scheduler executes on each individual core as a separate thread or instance. Whenever a core is idle, the scheduler thread is invoked and checks the shared TDS for the list of ready-to-execute tasks. The shared TDS has the elements shown in Figure 6.2.
Figure 6.1: Dynamic scheduler
Figure 6.2: Task Data Structure(TDS)
Definitions
Ti - Task ID
Tis - Task status of Ti
Tid - Number of dependencies that should be resolved to start execution of Ti
Tia(n) - List of tasks that become available due to execution of task Ti
Tip - Priority number of task Ti
Tidp - Pointer to data required for executing task Ti
Tisp - Stack pointer
Tix - Execution time for Ti
Chapter 7
System Testing
Table 7.1: Test Case
Test case name | Operation | Expected Output | Result

Checking the throughput of the scheduler | Check the number of processes that the scheduler can properly balance across the cores. | We assume the scheduler can handle 100 processes. | Pass

Checking the intelligence of the scheduler | Check how the scheduler intelligently splits a large process, and to which core it assigns a small process. | If a process is very short it can be assigned directly to any free core; if it is large, it is divided into small threads and each thread is assigned to a different core. | Pass

Checking the performance of the scheduler | Check the overall performance of the scheduler in terms of application speedup and level of parallelism. | Speedup should be greater than 1.5. | Pass
Chapter 8
Experimental Results
We discuss the proposed scheduling algorithm with the help of the following example. Table 8.1 below shows a dependency table for a set of six tasks. Each number indicates the time unit at which the dependency of a particular task is resolved. This table represents, in simplified form, the output of an offline dependency analysis on sequential code. An entry Tij (where i is the row number and j is the column number) means that task j can be started only after task i has finished Tij units of execution.

Table 8.1: Dependency Table

Task  T0   T1   T2   T3   T4   T5   Tx
T0     0  100  200  150    0    0  250
T1     0    0   50  150    0  200  300
T2     0    0    0   50  100  150  200
T3     0    0    0    0   50   99  100
T4     0    0    0    0    0  150  200
T5     0    0    0    0    0    0    0
Figure 8.1 shows the simulation output of the dynamic scheduler for the dependencies given in Table 8.1.

We have assumed the time unit is seconds. Column Tx gives the total execution time of the corresponding task. Each cell in the dependency table contains Tij for task j, which means task j can be started only after task i has finished Tij units of execution.

For example, task T2 starts only after task T0 finishes 200 s and task T1 finishes 50 s of execution, so T02 = 200 and T12 = 50.
At time t=0 all the cores will try to get lock of TDS and one of them (P1) gets
concluded that tasks T4 and T5 will be scheduled for execution on processors P3 and P4
respectively.
Chapter 9
Conclusion
The scheduling algorithm discussed attempts to increase the utilization of multi-
core processors. This algorithm is different in the sense that the processor owns the
responsibility of picking up tasks for execution whenever it is idle. This method gives
priority to tasks that resolve more dependencies and hence make sure that the updates
to the ready-to-execute tasks list are done accordingly. The scheduler resides on each
core as a separate thread or instance and hence is specific to a core, so we can conclude
that the proposed scheduler will be more efficient and will balance the load properly.
This project covered most of the important aspects of the Linux scheduler. The kernel scheduler is one of the most frequently executed components in a Linux system. Hence, it has received a lot of attention from kernel developers, who have striven to put the most optimized algorithms and code into the scheduler. Different algorithms used in the kernel scheduler were discussed in the project. The Dynamic Multicore scheduler achieves good performance and responsiveness while being relatively simple compared with previous algorithms like O(1). The Dynamic Multicore scheduler exceeds performance expectations in some workloads on multicore systems, but it still shows some weakness in other workloads; in particular, there is some unresponsiveness of the Dynamic Multicore scheduler in the 3D-game area.
Chapter 10
Future Scope
The future work includes delving deeper into the scheduling and process code so that we can implement a new scheduling algorithm in the kernel. Though this project gives a vivid overview and the basic steps of configuring and compiling the kernel and implementing scheduling policies like Dynamic Scheduling and SCHED_IDLE (with a lower priority), there were some challenges associated with it. One of the challenges was interpreting the change in scheduling policy through the process runtime. The goal for the future is to overcome such challenges and develop efficient techniques for kernel scheduling.

In the currently implemented Dynamic Scheduling policy we have considered the deadlock handling of processes; our next goal is to improve this Dynamic Scheduling policy by implementing a better deadlock-handling policy.
Appendix A
Appendix
Kernel Compilation
Compiling a custom kernel has its own advantages and disadvantages. However, new Linux users and admins find it difficult to compile a Linux kernel. Compiling the kernel requires understanding a few things and then just typing a couple of commands. This step-by-step how-to covers compiling Linux kernel version 2.6.xx under Debian GNU/Linux. However, the instructions remain the same for any other distribution, except for the apt-get command.
Step # 1 Get Latest Linux kernel code
Visit http://kernel.org/ and download the latest source code. The file name will be linux-x.y.z.tar.bz2, where x.y.z is the actual version number. For example, the file linux-2.6.25.tar.bz2 represents kernel version 2.6.25. Use the wget command to download the kernel source code:
$ cd /tmp
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-x.y.z.tar.bz2
Note: Replace x.y.z with actual version number.
Step # 2 Extract the tar (.tar.bz2) file
Type the following command:
# tar -xjvf linux-2.6.25.tar.bz2 -C /usr/src
# cd /usr/src
Step # 3 Configure kernel
Before you configure the kernel, make sure you have development tools (gcc compiler and related tools) installed on your system. If the gcc compiler and tools are not installed, use the apt-get command under Debian Linux to install the development tools.
# apt-get install gcc
Now you can start kernel configuration by typing any one of these commands:

$ make menuconfig -
Text-based color menus, radiolists & dialogs. This option is also useful on a remote server if you want to compile the kernel remotely.

$ make xconfig -
X Windows (Qt) based configuration tool, works best under the KDE desktop.

$ make gconfig -
X Windows (Gtk) based configuration tool, works best under the GNOME desktop.

For example, the make menuconfig command launches the screen shown in Figure A.1. You have to select different options as per your need. Each configuration option has a HELP button associated with it, so select the help button to get help.
Figure A.1: Menuconfig
Step # 4 Compile kernel
Start compiling to create a compressed kernel image, enter:
$ make
Start compiling the kernel modules:
$ make modules
Install kernel modules (become a root user, use su command):
$ su
# make modules_install
Step # 5 Install kernel
So far we have compiled kernel and installed kernel modules. It is time to install
kernel itself.
# make install
It will install three files into the /boot directory, as well as modify your GRUB kernel configuration file:
1. System.map-2.6.25
2. config-2.6.25
3. vmlinuz-2.6.25
Step # 6: Create an initrd image
Type the following command at a shell prompt:
# cd /boot
# mkinitrd -o initrd.img-2.6.25 2.6.25
The initrd image contains the device drivers needed to load the rest of the operating system later on. Not all computers require an initrd, but it is safe to create one.
Step # 7 Modify Grub configuration file - /boot/grub/menu.lst
Open file using vi
# vi /boot/grub/menu.lst
Figure A.2: menu.lst file
title Debian GNU/Linux, kernel 2.6.25 Default
root (hd0,0)
kernel /boot/vmlinuz root=/dev/hdb1 ro
initrd /boot/initrd.img-2.6.25
savedefault
boot
Remember to set up the correct root=/dev/hdXX device. Save and close the file. If you think editing and writing all the lines by hand is too much for you, try the update-grub command to update the lines for each kernel in the /boot/grub/menu.lst file. Just type the command:
# update-grub
Neat. Huh?
Step # 8 : Reboot computer and boot into your new kernel
Just issue reboot command:
# reboot
References
[1] D. Tam, R. Azimi, and M. Stumm. Thread clustering: sharing-aware scheduling on SMP-CMP-SMT multiprocessors. In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, pages 47-58, New York, NY, USA, 2007. ACM.
[2] F. Bellosa and M. Steckermeier. The performance implications of locality information usage in shared-memory multiprocessors. J. Parallel Distrib. Comput., 37(1):113-121, 1996.
[3] M. C. Carlisle and A. Rogers. Software caching and computation migration in
Olden. In Proceedings of the 5th ACM SIGPLAN Symposium on Principles and
Practice of Parallel Programming, 1995.
[4] Jakub Kurzak and Jack Dongarra, Fully Dynamic Scheduler for Numerical
Computing on Multicore Processors, LAPACK Working Note 220, UT-CS-09-643,
June 4, 2009 .
[5] S. Balakrishnan, R. Rajwar, M. Upton, and K. K. Lai. The impact of performance asymmetry in emerging multicore architectures. In 32nd International Symposium on Computer Architecture (ISCA 2005), 4-8 June 2005, Madison, Wisconsin, USA, pages 506-517. IEEE Computer Society, 2005.
[6] K. Asanovic et al. The Landscape of Parallel Computing Research: A View from Berkeley. Technical Report UCB/EECS-2006-183, University of California at Berkeley, December 2006.
[7] K. Olukotun, L. Hammond, J. Laudon, Chip Multiprocessor Architecture:
Techniques to Improve Throughput and Latency, Synthesis Lectures on Computer
Architecture, Morgan and Claypool, 2007.
[8] H. Lu, S. Dwarkadas, A. L. Cox, and W. Zwaenepoel. Message passing versus distributed shared memory on networks of workstations. In Supercomputing '95: Proceedings of the 1995 ACM/IEEE Conference on Supercomputing (CDROM), page 37. ACM Press, 1995.
[9] http://www.kernel.org
[10] http://www.ibiblio.org/pub/Linux/docs/HOWTO/KernelAnalysis-HOWTO
[11] http://www.barrelfish.org
[12] http://www.intel.com/core
[13] http://www.multicoreinfo.com