Database Theory
Jason Fan
Outline
• Basic Concepts
  – Database and Database Users (Chapter 1)
  – Database System Concepts and Architecture (Chapter 2)
• Database Design
  – Database Design Process (Chapter 16)
  – Entity-Relationship (ER) Modeling (Chapter 3)
  – Functional Dependencies and Normalization for Relational Databases (Chapter 14)
  – Relational Design Algorithms (Chapter 15)
  – Relational Data Model Mapping (Chapter 9)
• Relational Databases
  – The Relational Data Model (Chapter 7)
  – Relational Algebra (Chapter 7)
  – SQL – A Relational Database Language (Chapter 8)
  – Relational Calculus (Chapter 9)
• Database Implementation
  – Transaction Processing (Chapter 19)
  – Concurrency Control (Chapter 20)
  – Database Recovery (Chapter 21)
• Advanced Topics
Chapter 1 Database and Database Users
Database and Database Users
• Basic Concepts
• Main Characteristics of Database Technology
• Classes of Database Users
• Additional Database Characteristics
• When not to use a DBMS
Basic Concepts
• Database: A collection of related data.
• Data: Known facts that can be recorded and that have implicit meaning.
• Mini-world: Some part of the real world about which data is stored in a database.
• Database Management System (DBMS): A software package that facilitates the creation and maintenance of a computerized database.
• Database System: The DBMS software together with the data itself.
Main Characteristics of Database Technology
• Self-contained nature of a database system: a DBMS catalog stores the description (meta-data) of the database. This allows the DBMS software to work with different databases.
• Insulation between programs and data:
  – Data abstraction: a data model is used to hide storage details and present users with a conceptual view of the database.
  – Program-data independence: data storage structures can be changed without having to change the DBMS access programs.
  – Program-operation independence: the implementation of operations can be changed without having to change the programs that invoke them.
• Support of multiple views of the data
Additional Characteristics of Database Technology
• Controlling data redundancy
• Restricting unauthorized access to data.
• Providing persistent storage for program objects and data structure.
• Providing multiple interfaces to different classes of users.
• Representing complex relationships among data.
• Enforcing integrity constraints on the database.
• Providing backup and recovery services.
• Potential for enforcing standards.
• Flexibility to change data structures.
• Reduced application development time.
• Availability of up-to-date information.
• Economies of scale.
Classes of Database Users
• Workers on the scene: persons whose job involves daily use of a large database.
  – Database administrators (DBAs): responsible for managing the database system.
  – Database designers: responsible for designing the database.
  – End users: access the database for querying, updating, generating reports, etc.
    • Casual end users: occasional users.
    • Parametric (or naive) end users: use pre-programmed canned transactions to interact continuously with the database; for example, bank tellers or reservation clerks.
    • Sophisticated end users: use the full DBMS capabilities to implement complex applications.
  – System analysts / application programmers: design and implement canned transactions for parametric users.
• Workers behind the scene: persons whose job involves the design, development, operation, and maintenance of the DBMS software and system environment.
  – DBMS designers and implementers: design and implement the DBMS software package itself.
  – Tool developers: design and implement tools that facilitate the use of DBMS software. Tools include design tools, performance tools, special interfaces, etc.
  – Operators and maintenance personnel: run and maintain the hardware and software environment for the database system.
When not to Use a DBMS
• Main costs of using a DBMS:
  – High initial investment and possible need for additional hardware.
  – Overhead for providing generality, security, recovery, integrity, and concurrency control.
• When a DBMS may be unnecessary:
  – If the database and applications are simple, well defined, and not expected to change.
  – If there are stringent real-time requirements that may not be met because of DBMS overhead.
  – If access to data by multiple users is not required.
Chapter 2 Database Concepts and Architecture
Database System Concepts and Architecture
• Data Models
• Three-Schema Architecture
• Data Independence
• DBMS Languages
• DBMS Interfaces
• DBMS Architecture
• Database System Utilities
• Classification of DBMS
Data Models
• Data Model: a set of concepts to describe the structure of a database, together with certain constraints that the database should obey.
• Data Model Operations: operations for specifying database retrievals and updates by referring to the concepts of the data model.
• Categories of data models:
  – Conceptual (high-level, semantic) data models: provide concepts close to the way many users perceive data (also called entity-based or object-based data models).
  – Physical (low-level, internal) data models: provide concepts that describe the details of how data is stored in the computer.
  – Implementation (record-oriented) data models: provide concepts that fall between the above two, balancing user views with some computer storage details.
Data Models
• Database Schema: the description of a database, including the structure of the database and the constraints that should hold on it.
• Database Catalog: stores the database schema.
• Schema Diagram: a diagrammatic display of (some aspects of) a database schema.
• Database Instance: the actual data stored in a database at a particular moment in time. Also called the database state (or occurrence).
• The database schema changes very infrequently, whereas the database state changes every time the database is updated. The schema is also called the intension, and the state the extension.
Three Schema Architecture
• Internal schema at the internal level to describe data storage structures and access paths. Typically uses a physical data model.
• Conceptual schema at the conceptual level to describe the structure and constraints for the whole database. Uses a conceptual or an implementation data model.
• External schemas at the external level to describe the various user views. Usually uses the same data model as the conceptual level.
• Mappings transform requests and results between levels.
Database System Architecture
[Diagram: the three-schema architecture. External views at the external level are connected by external/conceptual mappings to the conceptual schema at the conceptual level, which is connected by the conceptual/internal mapping to the internal schema at the internal level, describing the stored databases.]
Data Independence
• Logical Data Independence: the capacity to change the conceptual schema without having to change the external schemas and their application programs.
• Physical Data Independence: the capacity to change the internal schema without having to change the conceptual schema.
• In a DBMS that fully supports data independence, when a schema at a lower level is changed, only the mappings between that schema and higher-level schemas need to change.
• Mappings create overhead.
Database System Languages
• Data Definition Language (DDL): used by the DBA and database designers to specify the conceptual schema of a database. In many DBMSs the DDL is also used to define the internal and external schemas (views). In some DBMSs, a separate storage definition language (SDL) and view definition language (VDL) are used to define the internal and external schemas.
• Data Manipulation Language (DML): used to specify database retrievals and updates.
  – A high-level (nonprocedural) DML can be used on its own to specify database operations.
  – A low-level (procedural) DML retrieves one record at a time and must be embedded in a general-purpose programming language.
  – When a DML is embedded in a general-purpose programming language (the host language), it is called a data sublanguage.
  – When a DML is used in a stand-alone interactive manner, it is called a query language.
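The host-language / data-sublanguage distinction can be sketched with Python's sqlite3 module standing in for a procedure-call interface; the employee table and its data below are hypothetical.

```python
# Sketch of embedded DML: Python is the host language, and SQL statements
# (the data sublanguage) are handed to the DBMS through a procedure-call
# interface (sqlite3). The employee table and its data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (ssn TEXT PRIMARY KEY, name TEXT, salary REAL)")
conn.execute("INSERT INTO employee VALUES ('123456789', 'John Smith', 30000)")

# A high-level (nonprocedural) retrieval: one statement says *what* to fetch,
# not how to fetch it record by record.
rows = conn.execute("SELECT name FROM employee WHERE salary > 20000").fetchall()
```

Typed interactively, the same SELECT statement would act as a query language; embedded as above, SQL acts as a data sublanguage.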
DBMS Interfaces
• Stand-alone query language interfaces
• Programmer interfaces for embedding DML in programming languages:
  – Pre-compiler approach
  – Procedure (subroutine) call approach
• Menu-based interfaces
• Graphics-based interfaces
• Forms-based interfaces
• Natural-language interfaces
• Combinations of the above
• Parametric interfaces using function keys
• Report generation languages
• Interfaces for the DBA:
  – Creating accounts, granting authorizations
  – Setting system parameters
  – Changing schemas or access paths
Database System Utilities
• Loading data stored in files into a database.
• Backing up the database periodically on tape.
• Reorganizing database file structures.
• Generating reports.
• Performance monitoring.
• Sorting files.
• User monitoring.
• Data compression.
Classification of DBMSs
• Based on the data model used:– Relational
– Multidimensional
– Network
– Hierarchical
– Object-oriented
– Semantic
– Entity-Relationship
• Other Classifications:– Single-user vs. multi-user
– Centralized vs. distributed
– Homogeneous vs. Heterogeneous
– OLTP vs. OLAP
Chapter 16 Practical Database Design and Tuning
Database Design
• Goals of database design
  – Satisfy the information content requirements of the specified users and applications
  – Provide a natural and easy-to-understand structuring of the information
  – Support processing requirements and any performance objectives
• Database design process
  – Requirements collection and analysis
  – Conceptual database design
  – Choice of DBMS
  – Data model mapping (logical database design)
  – Physical database design
  – Database system implementation and tuning
Requirement Collection and Analysis
• The major application areas and user groups that will use the database or whose work will be affected by it are identified. Key individuals and committees within each group are chosen to carry out subsequent steps of requirements analysis.
• Existing documentation concerning the applications is studied and analyzed.
• The current operating environment and planned use of the information is studied.
• Written responses to sets of questions are sometimes collected from potential users or user groups. Key individuals may be interviewed to help in assessing the worth of information and in setting up of priorities.
Conceptual Database Design
• Conceptual schema design
  – Choice of a high-level conceptual data model, such as the ER model or a dimensional model
  – Approaches to conceptual schema design:
    • centralized schema design approach
    • view integration approach
  – Strategies for conceptual schema design:
    • top-down strategy
    • bottom-up strategy
    • inside-out strategy
    • mixed strategy
• Transaction design
Physical Database Design
• Criteria guiding physical database design:
  – Response time
  – Space utilization
  – Transaction throughput
• Physical database design in relational databases
  – Factors that influence physical database design:
    • Analyzing the database queries and transactions
    • Analyzing the expected frequencies of invocation of queries and transactions
    • Analyzing the time constraints on queries and transactions
    • Analyzing the expected frequencies of update operations
    • Analyzing the uniqueness constraints on attributes
  – Physical database design decisions:
    • Indexing
    • De-normalization
    • Storage design
Database Tuning in Relational Database
• Goals:
  – Make applications run fast
  – Lower the response time of queries and transactions
  – Improve the overall throughput of transactions
• Tuning indexes
  – Some queries may take too long for lack of an index
  – Some indexes may not get utilized
  – Some indexes may cause excessive overhead
• Tuning the database design
  – De-normalization
  – Table partitioning
  – Duplicating attributes
• Tuning queries
Automated Design Tools
• Database Design Tools– Erwin
– Rational Rose
– Power Designer
• Schema Diagram Notation– UML (Unified Modeling Language)
– IDEF1X (Integration Definition for Information Modeling)
– IE (Information Engineering)
– CHEN's ERD Notation
Chapter 3 Data Modeling Using the Entity-Relationship Model
Entity-Relationship (ER) Modeling
• Example Database Application (COMPANY)
• ER Model Concepts– Entities and Attributes
– Entity Types, Value Sets, and Key Attributes
– Relationships and Relationship Types
– Structural Constraints and Roles
– Weak Entity Types
• ER Diagrams Notation
• Relationships of Higher Degree
• Enhanced ER Modeling
Example of COMPANY Database
• Requirements for the COMPANY database:
  – The company is organized into departments. Each department has a name, a number, and an employee who manages it; we keep track of the start date of the department manager. A department may have several locations.
  – Each department controls a number of projects. Each project has a name and a number, and is located at a single location.
  – We store each employee's social security number, address, salary, sex, and birth date. Each employee works for one department but may work on several projects. We keep track of the number of hours per week that an employee currently works on each project, and of the direct supervisor of each employee.
  – Each employee may have a number of dependents. For each dependent, we keep the name, sex, birth date, and relationship to the employee.
ER Model Concepts: Entities and Attributes
• Entities: specific objects or things in the mini-world that are represented in the database; for example, the EMPLOYEE John Smith, the Research DEPARTMENT, the ProductX PROJECT.
• Attributes: properties used to describe an entity; for example, an EMPLOYEE entity may have a Name, SSN, Address, Sex, and BirthDate. A specific entity has a value for each of its attributes; for example, a specific employee entity may have Name = 'John Smith', SSN = '123456789', Address = '731 Fondren, Houston, TX', Sex = 'M', BirthDate = '09-JAN-55'.
• Attribute types:
  – Simple: the entity has a single atomic value for the attribute; for example, SSN or Sex.
  – Composite: the attribute may be composed of several components; for example, Name(FirstName, MiddleName, LastName). Composition may form a hierarchy in which some components are themselves composite.
  – Multi-valued: the entity may have multiple values for the attribute; for example, the Color of a CAR or the PreviousDegrees of a STUDENT. Denoted as {Color} or {PreviousDegrees}.
ER Model Concept: Entity Types and Key Attributes
• Entity Type: defines a set of entities that have the same attributes; for example, the EMPLOYEE entity type or the PROJECT entity type.
• Key Attribute: an attribute of an entity type for which each entity must have a unique value; for example, SSN of EMPLOYEE.
  – A key attribute may be composite. For example, VehicleRegistrationNumber, with components (Number, State), is a key of the CAR entity type.
  – An entity type may have more than one key. For example, the CAR entity type may have two keys: VehicleIdentificationNumber and VehicleRegistrationNumber(Number, State).
• Domains (Value Sets) of Attributes: each simple attribute of an entity type is associated with a domain, which specifies the set of values that may be assigned to that attribute for each individual entity.
ER Model Concepts: Relationships and Relationship Types
• Relationship: relates two or more distinct entities with a specific meaning; for example, EMPLOYEE John Smith works on the ProductX PROJECT, or EMPLOYEE Franklin Wong manages the Research DEPARTMENT.
• Relationship Type: relationships of the same type are grouped into a relationship type; for example, the WORKS_ON relationship type, in which EMPLOYEEs and PROJECTs participate, or the MANAGES relationship type, in which EMPLOYEEs and DEPARTMENTs participate. More than one relationship type can exist with the same participating entity types; for example, MANAGES and WORKS_FOR are distinct relationship types between EMPLOYEE and DEPARTMENT.
• Degree of a relationship type: the number of participating entity types. Hence we speak of binary, ternary, and n-ary relationship types.
ER Model Concepts: Structural Constraints and roles
• A relationship can relate two entities of the same entity type; for example, a SUPERVISION relationship type relates one EMPLOYEE (in the role of supervisee) to another EMPLOYEE (in the role of supervisor). This is called a recursive relationship type.
• A relationship type can have attributes; for example, HoursPerWeek of WORKS_ON; its value for each relationship instance describes the number of hours per week that an EMPLOYEE works on a PROJECT.
• Structural constraints on relationships:
  – Cardinality ratio (of a binary relationship): 1:1, 1:N, N:1, or M:N.
  – Participation constraint (on each participating entity type): total (also called existence dependency) or partial.
ER Model Concepts: Weak Entity Types
• A weak entity type is an entity type that does not have a key attribute.
• A weak entity type must participate in an identifying relationship type with an owner (identifying) entity type.
• Entities are identified by the combination of a partial key of the weak entity type and the key of the identifying entity type.
• Example: suppose that a DEPENDENT entity is identified by the dependent's first name and birth date, together with the specific EMPLOYEE the dependent is related to. DEPENDENT is then a weak entity type with EMPLOYEE as its identifying entity type, via the identifying relationship type DEPENDENT_OF.
Conceptual Design of COMPANY Database
• Entity types– DEPARTMENT
– PROJECT
– EMPLOYEE
– DEPENDENT
• Relationship types
  – Manages (1:1)
  – Works_for (1:N)
  – Supervision (1:N)
  – Controls (1:N)
  – Works_on (M:N)
  – Has_dependents (1:N)
ER Diagram of COMPANY Database
[ER diagram of the COMPANY database: entity types EMPLOYEE (Name(Fname, Minit, Lname), SSN, Bdate, Sex, Address), DEPARTMENT (Name, Number, Location), PROJECT (Name, Number, Location), and the weak entity type DEPENDENT (Name, Sex, Bdate, Relationship), connected by the relationship types Manages (1:1, with attribute StartDate), Works_for (N:1), Supervision (N:1), Controls (1:N), Works_on (M:N, with attribute Hours), and Has_dependents (1:N).]
Alternative Notation for Relationship Structural Constraints
• Specified on each participation of an entity type E in a relationship type R.
• Specifies that each entity e in E participates in at least min and at most max relationship instances in R.
• Default (no constraint): min = 0, max = n.
• Must have min ≤ max, min ≥ 0, max ≥ 1.
• Examples:
  – A department has exactly one manager and an employee can manage at most one department:
    • specify (1,1) for the participation of DEPARTMENT in MANAGES
    • specify (0,1) for the participation of EMPLOYEE in MANAGES
  – An employee works for exactly one department but a department can have any number of employees:
    • specify (1,1) for the participation of EMPLOYEE in WORKS_FOR
    • specify (0,n) for the participation of DEPARTMENT in WORKS_FOR
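The (min, max) rule amounts to a simple count over relationship instances; a sketch with hypothetical entities and MANAGES instances:

```python
# Sketch: verifying a (min, max) participation constraint. Each entity in E
# must appear in at least `mn` and at most `mx` relationship instances of R.
# The entity and relationship data below are hypothetical.
from collections import Counter

def satisfies_min_max(entities, participations, mn, mx):
    counts = Counter(participations)      # instances each entity participates in
    return all(mn <= counts[e] <= mx for e in entities)

departments = ["Research", "Admin"]
employees = ["Smith", "Wong", "Zelaya"]
manages_dept = ["Research", "Admin"]      # DEPARTMENT side of MANAGES instances
manages_emp = ["Wong", "Zelaya"]          # EMPLOYEE side (Smith manages nothing)

ok_dept = satisfies_min_max(departments, manages_dept, 1, 1)  # (1,1) holds
ok_emp = satisfies_min_max(employees, manages_emp, 0, 1)      # (0,1) holds
```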
ER Diagram of COMPANY Database
[The same ER diagram of the COMPANY database, redrawn with (min, max) notation: Manages with (0,1) on EMPLOYEE and (1,1) on DEPARTMENT, StartDate as relationship attribute; Works_for with (1,1) on EMPLOYEE and (0,N) on DEPARTMENT; Supervision, Controls, Works_on (with attribute Hours), and Has_dependents as before.]
Chapter 4 Enhanced Entity-Relationship and Object Modeling
Enhanced Entity-Relationship and Object Modeling
• Subclass, Superclass and Inheritance
• Specialization and Generalization
  – Disjoint/Overlapping
  – Total/Partial
• Union/Categories
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
The Relational Data Model
• Relational Model Concepts
• Characteristics of Relations
• Relational Integrity Constraints– Domain Constraints
– Key Constraints
– Entity Integrity Constraints
– Referential Integrity Constraints
• Update Operations on Relations
Relational Model Concepts
PRODUCT
  ProductID  ProductName     UnitPrice  UnitInStock
  1          Chai            18.00      39
  3          Aniseed Syrup   22.00      53
  11         Queso Cabrales  21.00      22

ORDER_ITEM
  OrderID  ProductID  Quantity  Discount  UnitPrice
  1        1          20        0.1       20.00
  1        3          15        0.1       25.00
  2        1          30        0.2       18.00
  2        11         10        0.2       22.00
  3        11         35        0.15      21.00

(PRODUCT and ORDER_ITEM are relation names, the column headers are the attributes, and each row is a tuple.)
Relational Model Concepts
• Relation (informally): a table of values. Each column in the table has a column header called an attribute, and each row is called a tuple.
• Formal relational concepts:
  – Domain: a set of atomic (indivisible) values.
  – Attribute: a name that suggests the role a domain plays in a particular relation. Each attribute Ai has a domain Dom(Ai).
  – Relation schema: a relation name R and a set of attributes Ai that define the relation; denoted R(A1, A2, ..., An). For example: STUDENT(Name, SSN, BirthDate, Addr).
  – Relational database schema: a set S of relation schemas that belong to the same database, where S is the name of the database: S = {R1, R2, ..., Rn}.
  – Degree of a relation: its number of attributes n.
  – Tuple t of R(A1, A2, ..., An): an (ordered) set of values t = <v1, v2, ..., vn>, where each value vi is an element of Dom(Ai). Also called an n-tuple.
  – Relation instance r(R): a set of tuples r(R) = {t1, t2, ..., tm}; alternatively, r(R) ⊆ Dom(A1) × Dom(A2) × ... × Dom(An).
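The relation-instance definition (a set of tuples inside the Cartesian product of the attribute domains) can be checked directly on a toy instance; the domains and data below are illustrative.

```python
# Sketch: a relation instance as a set of tuples, verified to lie inside the
# Cartesian product of its attribute domains. Domains and data are toy examples.
from itertools import product

dom = {"Name": {"John", "Mary"}, "Grade": {"A", "B"}}
attrs = ("Name", "Grade")

r = {("John", "A"), ("Mary", "B")}                  # relation instance r(R)
cart = set(product(*(dom[a] for a in attrs)))       # Dom(Name) x Dom(Grade)

is_valid = r <= cart                                # r(R) subset of the product
```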
Characteristics of Relations
• The tuples are not considered to be ordered, even though they appear to be in tabular form.
• We consider the attributes in R(A1, A2, ..., An) and the values in t = <v1, v2, ..., vn> to be ordered. (However, a more general alternative definition of relation does not require this ordering.)
• All values are considered atomic (indivisible). A special null value is used to represent values that are unknown or inapplicable to certain tuples.
• Notation:
  – We refer to the component value of a tuple t for attribute Ai by t[Ai] = vi.
  – Similarly, t[Au, Av, ..., Aw] refers to the sub-tuple of t containing the values of attributes Au, Av, ..., Aw, respectively.
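The t[Ai] notation maps naturally onto a dictionary representation of a tuple; a small sketch (attribute values are illustrative):

```python
# Sketch: a tuple t as a mapping from attribute names to values, so that
# component access mirrors t[Ai] = vi and t[Au, ..., Aw] is a sub-tuple.
t = {"Name": "John Smith", "SSN": "123456789", "BirthDate": "09-JAN-55"}

value = t["SSN"]                                  # t[SSN]
sub = tuple(t[a] for a in ("Name", "SSN"))        # sub-tuple t[Name, SSN]
```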
Relational Constraints
[The PRODUCT and ORDER_ITEM relations again, annotated with their keys: ProductID is the primary key of PRODUCT; (OrderID, ProductID) is the primary key of ORDER_ITEM; and ORDER_ITEM.ProductID is a foreign key referencing PRODUCT.ProductID.]
Relational Constraints
• Constraints are conditions that must hold on all valid relation instances. There are three main types of constraints:
• Domain constraints: the value of each attribute must be atomic.
• Key constraints:
  – Superkey of R: a set of attributes SK of R such that no two tuples in any valid relation instance r(R) have the same value for SK. That is, for any distinct tuples t1 and t2 in r(R), t1[SK] ≠ t2[SK].
  – Key (candidate key) of R: a "minimal" superkey; that is, a superkey K such that removal of any attribute from K leaves a set of attributes that is no longer a superkey.
  – Example: the relation schema CAR(State, Reg#, SerialNo, Make, Model, Year) has two keys, Key1 = {State, Reg#} and Key2 = {SerialNo}, which are also superkeys. {SerialNo, Make} is a superkey but not a key.
  – If a relation has several candidate keys, one is chosen arbitrarily as the primary key. The primary key attributes are underlined.
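The superkey condition ("no two tuples agree on SK") is easy to test mechanically; a sketch over an invented CAR instance:

```python
# Sketch: testing the superkey condition on a sample relation instance --
# a set of attributes is a superkey iff no two tuples share values on it.
# The CAR tuples below are invented for illustration.
def is_superkey(rows, attrs, sk):
    idx = [attrs.index(a) for a in sk]
    proj = [tuple(r[i] for i in idx) for r in rows]
    return len(proj) == len(set(proj))            # all projections distinct

attrs = ("State", "Reg", "SerialNo", "Make")
cars = [("TX", "101", "S1", "Ford"),
        ("TX", "102", "S2", "Ford"),
        ("NY", "101", "S3", "Ford")]

sk1 = is_superkey(cars, attrs, ("State", "Reg"))  # a key (minimal superkey)
sk2 = is_superkey(cars, attrs, ("SerialNo",))     # also a key
sk3 = is_superkey(cars, attrs, ("Make",))         # not even a superkey
```

A candidate key would additionally require that no proper subset of the attribute set passes this test.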
Relational Constraints
• Entity integrity: the primary key attributes PK of each relation schema R in S cannot have null values in any tuple of r(R), because primary key values are used to identify the individual tuples: t[PK] ≠ null for any tuple t in r(R).
• Referential integrity: specifies a relationship between tuples in two relations, the referencing relation and the referenced relation. Tuples in the referencing relation R1 have foreign key attributes FK that reference the primary key attributes PK of the referenced relation R2. A tuple t1 in R1 is said to reference a tuple t2 in R2 if t1[FK] = t2[PK]. A referential integrity constraint can be displayed in a relational database schema as a directed arc from R1.FK to R2.
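Referential integrity as enforced by a DBMS can be sketched with SQLite, where enforcement must be switched on explicitly; the schema is a cut-down, hypothetical version of the PRODUCT/ORDER_ITEM example.

```python
# Sketch: a foreign key ORDER_ITEM.product_id referencing PRODUCT.product_id.
# SQLite enforces it once PRAGMA foreign_keys is switched on.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE order_item (
    order_id   INTEGER,
    product_id INTEGER REFERENCES product(product_id),
    PRIMARY KEY (order_id, product_id))""")

conn.execute("INSERT INTO product VALUES (1, 'Chai')")
conn.execute("INSERT INTO order_item VALUES (1, 1)")       # t1[FK] = t2[PK]: OK

try:
    conn.execute("INSERT INTO order_item VALUES (1, 99)")  # no product 99 exists
    violated = False
except sqlite3.IntegrityError:
    violated = True                                        # constraint rejected it
```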
Operations
[The PRODUCT and ORDER_ITEM relations again, with two attempted insertions into PRODUCT: (13, Syrup, 23.00, 20), and (13, xyz, {22.00, 21.00}, {25, 35}), whose non-atomic attribute values violate the domain constraint and whose duplicated ProductID violates the key constraint.]
Update Operations on Relations
• Update operations:
  – INSERT a tuple
  – DELETE a tuple
  – MODIFY a tuple
• Integrity constraints should not be violated by update operations:
  – An insert operation can violate any constraint.
  – A delete operation can violate referential constraints.
  – Modifying a primary key or foreign key attribute is equivalent to deleting one tuple and inserting another; modifying other attributes causes no problems.
• Several update operations may have to be grouped together.
• Updates may propagate to cause other updates automatically; this may be necessary to maintain integrity constraints.
• If an update violates an integrity constraint, several actions can be taken:
  – cancel the operation that causes the violation
  – perform the operation but inform the user of the violation
  – trigger additional updates so the violation is corrected
  – execute a user-specified error-correction routine
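The first action, cancelling the violating operation, can be sketched with SQLite (hypothetical table; note that SQLite needs an explicit NOT NULL to enforce entity integrity on a TEXT primary key):

```python
# Sketch: an INSERT with a null primary key violates entity integrity; the
# DBMS rejects it and the application cancels (rolls back) the operation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (ssn TEXT PRIMARY KEY NOT NULL, name TEXT)")
conn.execute("INSERT INTO employee VALUES ('123456789', 'Smith')")
conn.commit()

try:
    conn.execute("INSERT INTO employee VALUES (NULL, 'Wong')")  # t[PK] = null
except sqlite3.IntegrityError:
    conn.rollback()                       # action taken: cancel the operation

count = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
```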
Chapter 9 ER- and EER-to-Relational Mapping, and Other Relational Languages
Data Model Mapping
• ER-to-Relational Mapping
• EER-to-Relational Mapping
Relational Model of COMPANY Database
EMPLOYEE(FNAME, MINIT, LNAME, SSN, BDATE, ADDRESS, SEX, SALARY, SUPERSSN, DNO)
DEPARTMENT(DNAME, DNUMBER, MGRSSN, MGRSTARTDATE)
DEPT_LOCATION(DNUMBER, DLOCATION)
PROJECT(PNAME, PNUMBER, PLOCATION, DNUM)
WORKS_ON(ESSN, PNO, HOURS)
DEPENDENT(ESSN, DEPENDENT_NAME, SEX, BDATE, RELATIONSHIP)
ER-to-Relational Mapping
• STEP 1: For each regular (strong) entity type E in the ER schema, create a relation R that includes all the simple attributes of E. Include only the simple component attributes of a composite attribute. Choose one of the key attributes of E as primary key for R. If the chosen key of E is composite, the set of simple attributes that form it will together form the primary key of R.
• STEP 2: For each weak entity type W in the ER schema with owner entity type E, create a relation R, and include all simple attributes (or simple components of composite attributes) of W as attributes of R. In addition, include as foreign key attributes of R the primary key attribute(s) of the relation(s) that correspond to the owner entity type(s); this takes care of the identifying relationship type of W. The primary key of R is the combination of the primary key(s) of the owner(s) and the partial key of the weak entity type W, if any.
• STEP 3: For each binary 1:1 relationship type R in the ER schema, identify the relations S and T that correspond to the entity types participating in R. Choose one of the relations—S, say—and include as foreign key in S the primary key of T. It is better to choose an entity type with total participation in R in the role of S. Include all the simple attributes (or simple components of composite attributes) of the 1:1 relationship type R as attributes of S.
• STEP 4: For each regular binary 1:N relationship type R, identify the relation S that represents the participating entity type at the N-side of the relationship type. Include as foreign key in S the primary key of the relation T that represents the other entity type participating in R. Include any simple attributes (or simple components of composite attributes) of the 1:N relationship type as attributes of S.
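STEPs 3 and 4, applied to the COMPANY example, can be sketched as SQL DDL (via sqlite3; column names are illustrative):

```python
# Sketch of STEP 3 and STEP 4: both the 1:1 MANAGES and the 1:N WORKS_FOR
# relationship types become foreign keys. MANAGES goes on DEPARTMENT (the
# side with total participation); WORKS_FOR goes on EMPLOYEE (the N-side).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (
    ssn TEXT PRIMARY KEY,
    dno INTEGER);                  -- STEP 4: foreign key for WORKS_FOR (N-side)
CREATE TABLE department (
    dnumber        INTEGER PRIMARY KEY,
    mgr_ssn        TEXT REFERENCES employee(ssn),  -- STEP 3: FK for MANAGES
    mgr_start_date TEXT);          -- relationship attribute StartDate comes along
""")
dept_cols = [row[1] for row in conn.execute("PRAGMA table_info(department)")]
```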
ER-to-Relational Mapping
• STEP 5: For each binary M:N relationship type R, create a new relation S to represent R. Include as foreign key attributes in S the primary keys of the relations that represent the participating entity types; their combination will form the primary key of S. Also include any simple attributes of the M:N relationship type (or simple components of composite attributes) as attributes of S. Notice that we cannot represent an M:N relationship type by a single foreign key attribute in one of the participating relations—as we did for 1:1 or 1:N relationship types—because of the M:N cardinality ratio.
• STEP 6: For each multivalued attribute A, create a new relation R. This relation R will include an attribute corresponding to A, plus the primary key attribute K—as a foreign key in R—of the relation that represents the entity type or relationship type that has A as an attribute. The primary key of R is the combination of A and K. If the multivalued attribute is composite, we include its simple components.
• STEP 7: For each n-ary relationship type R, where n > 2, create a new relation S to represent R. Include as foreign key attributes in S the primary keys of the relations that represent the participating entity types. Also include any simple attributes of the n-ary relationship type (or simple components of composite attributes) as attributes of S. The primary key of S is usually a combination of all the foreign keys that reference the relations representing the participating entity types. However, if the cardinality constraints on any of the entity types E participating in R is 1, then the primary key of S should not include the foreign key attribute that references the relation E’ corresponding to E. This concludes the mapping procedure.
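STEPs 5 and 6 can likewise be sketched as DDL for the COMPANY example (sqlite3; names illustrative):

```python
# Sketch of STEP 5 (M:N relationship -> its own relation with two foreign keys)
# and STEP 6 (multivalued attribute -> its own relation).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee   (ssn     TEXT    PRIMARY KEY);
CREATE TABLE project    (pnumber INTEGER PRIMARY KEY);
CREATE TABLE department (dnumber INTEGER PRIMARY KEY);

-- STEP 5: WORKS_ON becomes a relation; the two foreign keys together form
-- the primary key, and the relationship attribute Hours rides along.
CREATE TABLE works_on (
    essn  TEXT    REFERENCES employee(ssn),
    pno   INTEGER REFERENCES project(pnumber),
    hours REAL,
    PRIMARY KEY (essn, pno));

-- STEP 6: the multivalued attribute {Location} of DEPARTMENT becomes a
-- relation keyed by the combination (dnumber, dlocation).
CREATE TABLE dept_locations (
    dnumber   INTEGER REFERENCES department(dnumber),
    dlocation TEXT,
    PRIMARY KEY (dnumber, dlocation));
""")
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
```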
EER-to-Relational Mapping
• STEP 8: Convert each specialization with m subclasses {S1, S2, ..., Sm} and (generalized) superclass C, where the attributes of C are {k, a1, ..., an} and k is the (primary) key, into relation schemas using one of the four following options:
  – Option 8A: Create a relation L for C with attributes Attrs(L) = {k, a1, ..., an} and PK(L) = k. Create a relation Li for each subclass Si, 1 ≤ i ≤ m, with attributes Attrs(Li) = {k} ∪ {attributes of Si} and PK(Li) = k.
  – Option 8B: Create a relation Li for each subclass Si, 1 ≤ i ≤ m, with attributes Attrs(Li) = {attributes of Si} ∪ {k, a1, ..., an} and PK(Li) = k.
  – Option 8C: Create a single relation L with attributes Attrs(L) = {k, a1, ..., an} ∪ {attributes of S1} ∪ ... ∪ {attributes of Sm} ∪ {t} and PK(L) = k. This option is for a specialization whose subclasses are disjoint; t is a type (or discriminating) attribute that indicates the subclass, if any, to which each tuple belongs. This option has the potential for generating a large number of null values.
  – Option 8D: Create a single relation schema L with attributes Attrs(L) = {k, a1, ..., an} ∪ {attributes of S1} ∪ ... ∪ {attributes of Sm} ∪ {t1, t2, ..., tm} and PK(L) = k. This option is for a specialization whose subclasses are overlapping (but it also works for a disjoint specialization); each ti, 1 ≤ i ≤ m, is a Boolean attribute indicating whether a tuple belongs to subclass Si.
Chapter 9 ER- and EER-to-Relational Mapping, and Other Relational Languages
ER-to-Relational Mapping
ER Model → Relational Model
Entity type → Entity relation
1:1 and 1:N relationship type → Foreign key
M:N relationship type → Relationship relation and two foreign keys
N-ary relationship type → Relationship relation and n foreign keys
Simple attribute → Attribute
Composite attribute → Set of component attributes
Multivalued attribute → Relation and foreign key
Value set (Domain) → Domain
Key attribute → Primary (or candidate) key
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
The Relational Algebra
• Relational algebra is a collection of operations used to manipulate relations
• The result of each query is itself a relation
• Relational operations:
– SELECT and PROJECT operations
– Sequences of operations and renaming of attributes
– Set operations
• UNION
• INTERSECTION
• DIFFERENCE
• CARTESIAN PRODUCT
– JOIN operations
– Other relational operations
• DIVISION
• OUTER JOIN
• AGGREGATE FUNCTIONS
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• SELECT operation (denoted by σ)
– Selects the tuples (rows) from a relation R that satisfy a certain selection condition c
– Form of the operation: σc(R)
– The condition c is an arbitrary Boolean expression on the attributes of R
– The resulting relation has the same attributes as R
– The resulting relation includes each tuple in r(R) whose attribute values satisfy condition c
– Examples: σDNO=4(EMPLOYEE)
σSALARY>30000(EMPLOYEE)
σ((DNO=4 AND SALARY>25000) OR DNO=5)(EMPLOYEE)
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
– Examples:
σDNO=4(EMPLOYEE)
Jennifer S Wallace 987654321 1941-06-20 291 Berry, Bellaire, TX F 43000.00 888665555 4
Ahmad V Jabbar 987987987 1969-03-29 Dallas, Houston, TX M 25000.00 987654321 4
Alicia J Zelaya 999887777 1968-07-19 Castle, Spring, TX F 25000.00 987654321 4
σSALARY>30000(EMPLOYEE)
Franklin T Wong 333445555 1955-12-08 638 Voss, Houston, TX M 40000.00 888665555 5
Ramesh K Narayan 666884444 1962-09-15 975 Fire Oak, Humble, TX M 38000.00 333445555 5
James E Borg 888665555 1937-11-10 450 Stone, Houston, TX M 55000.00 null 1
Jennifer S Wallace 987654321 1941-06-20 291 Berry, Bellaire, TX F 43000.00 888665555 4
σ((DNO=4 AND SALARY>25000) OR DNO=5)(EMPLOYEE)
Franklin T Wong 333445555 1955-12-08 638 Voss, Houston, TX M 40000.00 888665555 5
Ramesh K Narayan 666884444 1962-09-15 975 Fire Oak, Humble, TX M 38000.00 333445555 5
Jennifer S Wallace 987654321 1941-06-20 291 Berry, Bellaire, TX F 43000.00 888665555 4
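The SELECT (σ) results above can be reproduced with a small pure-Python sketch of the operator. The tuple layout and sample data below are illustrative, not the full EMPLOYEE relation:

```python
# A minimal sketch of the relational SELECT (sigma) operator in plain Python.
# EMPLOYEE tuples here are (FNAME, LNAME, SSN, SALARY, DNO); positions are
# illustrative assumptions, not the textbook's full schema.

EMPLOYEE = [
    ("John",     "Smith",   "123456789", 30000, 5),
    ("Franklin", "Wong",    "333445555", 40000, 5),
    ("Jennifer", "Wallace", "987654321", 43000, 4),
    ("Ahmad",    "Jabbar",  "987987987", 25000, 4),
]

def select(relation, condition):
    """sigma_c(R): keep the tuples of R that satisfy condition c.

    The result has the same attributes (tuple shape) as R."""
    return [t for t in relation if condition(t)]

# sigma_{DNO=4}(EMPLOYEE)
dept4 = select(EMPLOYEE, lambda t: t[4] == 4)

# sigma_{(DNO=4 AND SALARY>25000) OR DNO=5}(EMPLOYEE)
mixed = select(EMPLOYEE, lambda t: (t[4] == 4 and t[3] > 25000) or t[4] == 5)
```

Note that the condition is an arbitrary Boolean function of one tuple, which mirrors the definition of c as a Boolean expression on the attributes of R.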
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• PROJECT operation (denoted by π)
– Keeps only certain attributes (columns) from a relation R, specified in an attribute list L
– Form of the operation: πL(R)
– The resulting relation has only those attributes of R specified in L
– The PROJECT operation eliminates duplicate tuples in the resulting relation so that it remains a mathematical set (no duplicate elements)
– Examples:
πFNAME,LNAME,SALARY(EMPLOYEE)
πSEX,SALARY(EMPLOYEE)
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
– Example
πFNAME,LNAME,SALARY(EMPLOYEE)
John Smith 30000.00
Franklin Wong 40000.00
Joyce English 25000.00
Ramesh Narayan 38000.00
James Borg 55000.00
Jennifer Wallace 43000.00
Ahmad Jabbar 25000.00
Alicia Zelaya 25000.00
πSEX,SALARY(EMPLOYEE)
F 25000.00
F 43000.00
M 25000.00
M 30000.00
M 38000.00
M 40000.00
M 55000.00
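PROJECT's duplicate elimination can be sketched by building the result as a Python set of tuples; sample data and column positions are illustrative:

```python
# A sketch of PROJECT (pi): keep only the listed columns and drop duplicate
# tuples, so the result stays a mathematical set. The data and attribute
# positions (FNAME, LNAME, SEX, SALARY) are illustrative assumptions.

EMPLOYEE = [
    ("John",     "Smith",   "M", 30000),
    ("Franklin", "Wong",    "M", 40000),
    ("Joyce",    "English", "F", 25000),
    ("Ahmad",    "Jabbar",  "M", 25000),
    ("Alicia",   "Zelaya",  "F", 25000),
    ("Jennifer", "Wallace", "F", 43000),
]

def project(relation, columns):
    """pi_L(R): keep only the columns in L; duplicates are eliminated
    because the result is built as a set."""
    return {tuple(t[i] for i in columns) for t in relation}

names_sal = project(EMPLOYEE, [0, 1, 3])   # pi_{FNAME,LNAME,SALARY}
sex_sal   = project(EMPLOYEE, [2, 3])      # pi_{SEX,SALARY}
```

With six input rows, πSEX,SALARY yields only five tuples because (F, 25000) occurs twice in the source relation.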
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• Sequence of operations: several operations can be combined to form a relational algebra expression (query)
• Example– Retrieve the names and salaries of employees who work in department 4.
πFNAME,LNAME,SALARY(σDNO=4(EMPLOYEE))
Jennifer Wallace 43000.00
Ahmad Jabbar 25000.00
Alicia Zelaya 25000.00
– Alternatively, we can specify an explicit intermediate relation for each step:
DEPT4_EMPS ← σDNO=4(EMPLOYEE)
R ← πFNAME,LNAME,SALARY(DEPT4_EMPS)
• Attributes can optionally be renamed in the left-hand-side relation of the assignment (this may be required for some operations presented later):
DEPT4_EMPS ← σDNO=4(EMPLOYEE)
R(FIRSTNAME,LASTNAME,SALARY) ← πFNAME,LNAME,SALARY(DEPT4_EMPS)
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• Set Operations
– UNION: R1 ∪ R2
– INTERSECTION: R1 ∩ R2
– SET DIFFERENCE: R1 − R2
– CARTESIAN PRODUCT: R1 × R2
– For ∪, ∩, and −, the operand relations R1(A1, A2, ..., An) and R2(B1, B2, ..., Bn) must have the same number of attributes, and the domains of corresponding attributes must be compatible; that is, dom(Ai) = dom(Bi) for i = 1, 2, ..., n. This condition is called union compatibility.
– The resulting relation for ∪, ∩, or − has the same attribute names as the first operand relation R1 (by convention).
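Since relations are sets of tuples, the three union-compatible operations map directly onto Python set operators. The sketch below checks compatibility only by degree (number of attributes); a real DBMS would also check domains:

```python
# Sketch: UNION, INTERSECTION, and DIFFERENCE on union-compatible relations,
# modeled as Python sets of tuples. The toy relations R1 and R2 are invented.

R1 = {("a", 1), ("b", 2)}
R2 = {("b", 2), ("c", 3)}

def check_union_compatible(r1, r2):
    """Crude union-compatibility check: every tuple must have the same
    degree (number of attributes). Domain checking is omitted here."""
    degree = len(next(iter(r1)))
    assert all(len(t) == degree for t in r1 | r2), "degrees differ"

check_union_compatible(R1, R2)
union = R1 | R2          # R1 UNION R2
inter = R1 & R2          # R1 INTERSECT R2
diff  = R1 - R2          # R1 MINUS R2
```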
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• Cartesian product
– R(A1, A2, ..., Am, B1, B2, ..., Bn) ← R1(A1, A2, ..., Am) × R2(B1, B2, ..., Bn)
– A tuple t exists in R for each combination of tuples t1 from R1 and t2 from R2 such that t[A1, A2, ..., Am] = t1 and t[B1, B2, ..., Bn] = t2
– If R1 has n1 tuples and R2 has n2 tuples, then R will have n1*n2 tuples
– CARTESIAN PRODUCT is a meaningless operation on its own. It can combine related tuples from two relations if followed by the appropriate SELECT operation.
– Example: Combine each DEPARTMENT tuple with the EMPLOYEE tuple of the manager.
DEP_EMP ← DEPARTMENT × EMPLOYEE
DEPT_MANAGER ← σMGRSSN=SSN(DEP_EMP)
James E Borg 888665555 … Headquarters 1 888665555 1981-06-19
Jennifer S Wallace 987654321 … Administration 4 987654321 1995-01-01
Franklin T Wong 333445555 … Research 5 333445555 1988-05-22
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• JOIN operation
– THETA JOIN: similar to a CARTESIAN PRODUCT followed by a SELECT. The condition c is called the join condition.
R(A1, A2, ..., Am, B1, B2, ..., Bn) ← R1(A1, A2, ..., Am) ⋈c R2(B1, B2, ..., Bn)
c is of the form <condition> AND <condition> AND . . . AND <condition>, where each condition is of the form Ai θ Bj, Ai is an attribute of R1, Bj is an attribute of R2, Ai and Bj have the same domain, and θ (theta) is one of the comparison operators {=, <, ≤, >, ≥, ≠}.
– EQUIJOIN
• The condition c uses only the operator '='.
• The attributes appearing in condition c are called join attributes.
Example: Retrieve each DEPARTMENT’s name and its manager’s name:
T ← DEPARTMENT ⋈MGRSSN=SSN EMPLOYEE
RESULT ← πDNAME,FNAME,LNAME(T)
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• JOIN operations
– NATURAL JOIN (*):
In an EQUIJOIN R ← R1 ⋈c R2, the join attributes of R2 appear redundantly in the result relation R. In a NATURAL JOIN, the redundant join attributes of R2 are eliminated from R. The equality condition is implied and need not be specified.
R ← R1 *(join attributes of R1),(join attributes of R2) R2
– If the join attributes have the same names in both relations, they need not be specified and we can write R ← R1 * R2.
– Examples:
• Retrieve each EMPLOYEE’s name and the name of the DEPARTMENT he/she works for:
T ← EMPLOYEE *(DNO),(DNUMBER) DEPARTMENT
RESULT ← πFNAME,LNAME,DNAME(T)
• Retrieve each EMPLOYEE’s name and the name of his/her SUPERVISOR:
SUPERVISOR(SUPERSSN,SFN,SLN) ← πSSN,FNAME,LNAME(EMPLOYEE)
T ← EMPLOYEE * SUPERVISOR
RESULT ← πFNAME,LNAME,SFN,SLN(T)
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• Complete Set of Relational Algebra Operations:– All the operations discussed so far can be described as a sequence of only the
operations SELECT, PROJECT, UNION, SET DIFFERENCE, and CARTESIAN PRODUCT.
– Hence, the set {σ, π, ∪, −, ×} is called a complete set of relational algebra operations. Any query language equivalent to these operations is called relationally complete.
– For database applications, additional operations are needed that were not part of the original relational algebra. These include:
• Aggregate functions and grouping
• OUTER JOIN and OUTER UNION.
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• Aggregate Functions
– Functions such as SUM, COUNT, AVERAGE, MIN, and MAX are often applied to sets of values or sets of tuples in database applications
– Form of the operation: <grouping attributes> ℱ <function list>(R)
– The grouping attributes are optional
– Example 1: Retrieve the average salary of all employees (no grouping needed):
R(AVGSAL) ← ℱAVERAGE SALARY(EMPLOYEE)
35125.000000
– Example 2: For each department, retrieve the department number, the number of employees , and the average salary ( in the department):
R(DNO,NUMEMPS,AVGSAL) ← DNO ℱCOUNT SSN, AVERAGE SALARY(EMPLOYEE)
1 1 55000.000000
4 3 31000.000000
5 4 33250.000000
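The grouping-and-aggregation operator can be sketched by first partitioning the tuples by the grouping attribute and then applying the functions to each group. The salary data below reproduces the per-department figures shown above:

```python
# Sketch of <grouping attrs> F <function list>(R): group EMPLOYEE by DNO,
# then apply COUNT and AVERAGE per group. Tuple layout (SSN, SALARY, DNO)
# is an illustrative assumption.
from collections import defaultdict

EMPLOYEE = [
    ("888665555", 55000, 1),
    ("987654321", 43000, 4), ("987987987", 25000, 4), ("999887777", 25000, 4),
    ("123456789", 30000, 5), ("333445555", 40000, 5),
    ("453453453", 25000, 5), ("666884444", 38000, 5),
]

# Partition the tuples into groups sharing the same DNO value.
groups = defaultdict(list)
for ssn, salary, dno in EMPLOYEE:
    groups[dno].append(salary)

# Apply COUNT and AVERAGE independently to each group.
result = {dno: (len(sals), sum(sals) / len(sals)) for dno, sals in groups.items()}
```

The computed counts and averages match the example table (e.g. department 5 has 4 employees averaging 33250).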
Chapter 7 The Relational Data Model, Relational Constraints, and the Relational Algebra
Relational Operations
• OUTER JOIN:
– In a regular EQUIJOIN or NATURAL JOIN operation, tuples in R1 or R2 that do not have matching tuples in the other relation do not appear in the result. Some queries require all tuples in R1 (or R2, or both) to appear in the result. When no matching tuples are found, NULLs are placed for the missing attributes.
– LEFT OUTER JOIN: R1 ⟕ R2 lets every tuple in R1 appear in the result
– RIGHT OUTER JOIN: R1 ⟖ R2 lets every tuple in R2 appear in the result
– FULL OUTER JOIN: R1 ⟗ R2 lets every tuple in R1 or R2 appear in the result
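A LEFT OUTER JOIN can be sketched in a few lines: every left tuple survives, padded with `None` (SQL's NULL) when it has no match. The two-column toy relations and positional join keys are assumptions for illustration:

```python
# Sketch of LEFT OUTER JOIN: every R1 tuple appears in the result; tuples
# with no match in R2 are padded with None in place of SQL NULLs.

R1 = [("Smith", 5), ("Borg", 1)]       # (LNAME, DNO) - illustrative
R2 = [(5, "Research")]                 # (DNUMBER, DNAME) - illustrative

def left_outer_join(r1, r2, i, j):
    """Pair each t1 with every t2 where t1[i] == t2[j]; if none match,
    keep t1 padded with NULLs. Assumes r2 is non-empty (for pad width)."""
    out = []
    for t1 in r1:
        matches = [t2 for t2 in r2 if t1[i] == t2[j]]
        if matches:
            out.extend(t1 + t2 for t2 in matches)
        else:
            out.append(t1 + (None,) * len(r2[0]))
    return out

result = left_outer_join(R1, R2, 1, 0)
```

Here "Borg" has no matching department, so it still appears, with NULLs for the DEPARTMENT attributes.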
Chapter 8 SQL - The Relational Database Standard
SQL - A Relational Database Language
• Basic Concepts
• Data Definition in SQL
• Retrieval Queries in SQL
• Specifying Updates in SQL
• Relational Views in SQL
• Creating Indexes in SQL
• Embedding SQL in a Programming Language
• Recent Advances in SQL
Chapter 8 SQL - The Relational Database Standard
Basic Concept
• Catalog: A collection of schemas
• Schema: A collections of tables and other constructs such as constraints.
• Table: Represents a relation. It includes base tables and views.
• Column: Represents an attribute.
• Name Space Hierarchy: catalog -> schema -> table -> column
• Qualified Namecatalog_name[.schema_name[.table_name[.column_name]]]
Chapter 8 SQL - The Relational Database Standard
Data Definition in SQL
• CREATE TABLE :Specifies a new base relation by giving it a name and specifying each of its
attributes and their data types (INTEGER, FLOAT , DECIMAL (i,j), CHAR(n), VARCHAR(n)). A constraint NOT NULL may be specified on an attribute .
• Example:
CREATE TABLE DEPARTMENT
( DNAME VARCHAR(15) NOT NULL,
DNUMBER INT NOT NULL UNIQUE,
MGRSSN CHAR(9) NOT NULL,
MGRSTARTDATE DATETIME,
PRIMARY KEY (DNUMBER),
FOREIGN KEY (MGRSSN) REFERENCES EMPLOYEE );
Chapter 8 SQL - The Relational Database Standard
Data Definition in SQL
• DROP TABLEUsed to remove a relation (base table) and its definition. The relation can no
longer be used in queries , updates or any other commands since its description no longer exists.
Example : DROP TABLE DEPENDENT ;
• ALTER TABLE Used to add an attribute to one of the base relations. The new attribute will have
NULLs in all the tuples of the relation right after the command is executed ; hence, the NOT NULL constraint is not allowed for such an attribute.
Example :ALTER TABLE EMPLOYEE ADD JOB VARCHAR(12) ;
– The database users must still enter a value for the new attribute JOB for each EMPLOYEE tuple. This can be done using the UPDATE command.
Chapter 8 SQL - The Relational Database Standard
DDL for COMPANY Database
CREATE TABLE EMPLOYEE
( FNAME VARCHAR(15) NOT NULL,
MINIT CHAR,
LNAME VARCHAR(15) NOT NULL,
SSN CHAR(9) NOT NULL,
BDATE DATETIME,
ADDRESS VARCHAR(30),
SEX CHAR,
SALARY DECIMAL(19,2),
SUPERSSN CHAR(9),
DNO INT NOT NULL,
PRIMARY KEY(SSN),
FOREIGN KEY (SUPERSSN) REFERENCES EMPLOYEE(SSN)
);
Chapter 8 SQL - The Relational Database Standard
DDL for COMPANY Database
CREATE TABLE DEPARTMENT
( DNAME VARCHAR(15) NOT NULL ,
DNUMBER INT NOT NULL UNIQUE,
MGRSSN CHAR(9) NOT NULL,
MGRSTARTDATE DATETIME,
PRIMARY KEY(DNUMBER),
FOREIGN KEY (MGRSSN) REFERENCES EMPLOYEE
);
ALTER TABLE EMPLOYEE
ADD FOREIGN KEY (DNO)
REFERENCES DEPARTMENT(DNUMBER);
Chapter 8 SQL - The Relational Database Standard
DDL for COMPANY Database
CREATE TABLE DEPT_LOCATIONS
(DNUMBER INT NOT NULL,
DLOCATION VARCHAR(15) NOT NULL,
PRIMARY KEY (DNUMBER, DLOCATION),
FOREIGN KEY (DNUMBER) REFERENCES DEPARTMENT(DNUMBER)
);
CREATE TABLE PROJECT
(PNAME VARCHAR(15) NOT NULL,
PNUMBER INT NOT NULL,
PLOCATION VARCHAR(15),
DNUM INT NOT NULL,
PRIMARY KEY (PNUMBER),
FOREIGN KEY (DNUM) REFERENCES DEPARTMENT(DNUMBER)
);
Chapter 8 SQL - The Relational Database Standard
DDL for COMPANY Database
CREATE TABLE WORKS_ON
( ESSN CHAR(9) NOT NULL,
PNO INT NOT NULL,
HOURS DECIMAL(3,1) NOT NULL,
PRIMARY KEY (ESSN, PNO),
FOREIGN KEY (ESSN) REFERENCES EMPLOYEE(SSN),
FOREIGN KEY (PNO) REFERENCES PROJECT(PNUMBER)
);
CREATE TABLE DEPENDENT
( ESSN CHAR(9) NOT NULL,
DEPENDENT_NAME VARCHAR(15) NOT NULL,
SEX CHAR,
BDATE DATETIME,
RELATIONSHIP VARCHAR(8),
PRIMARY KEY (ESSN, DEPENDENT_NAME),
FOREIGN KEY (ESSN) REFERENCES EMPLOYEE(SSN)
);
Chapter 8 SQL - The Relational Database Standard
Basic Queries in SQL
• SQL has one basic statement for retrieving information from a database; the SELECT statement
• This is not the same as the SELECT operation of the relational algebra
• Important distinction between SQL and the formal relational model: SQL allows a table (relation) to have two or more tuples that are identical in all their attribute values.
• SQL relations can be constrained to be sets by a key constraint, or by using the DISTINCT option in the SELECT statement.
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement
• Basic form of the SQL SELECT statement is called a mapping or a SELECT-FROM-WHERE block
SELECT <attribute-list>
FROM <table list>
WHERE <condition>
where <attribute list> is a list of attribute names whose values are to be retrieved by the query, <table list> is a list of the relation names required to process the query, and <condition> is a conditional (Boolean) expression that identifies the tuples to be retrieved by the query.
• Basic SQL queries correspond to using the SELECT, PROJECT, and JOIN operations of the relational algebra.
Chapter 8 SQL - The Relational Database Standard
Sample Basic Queries
• Query 0: Retrieve the birth date and address of the employee whose name is 'John B. Smith'.
Q0: SELECT BDATE, ADDRESS
FROM EMPLOYEE
WHERE FNAME = 'John' AND MINIT = 'B' AND LNAME = 'Smith'
• Query 1: Retrieve the name and address of all employees who work for the 'Research' department.
Q1: SELECT FNAME, LNAME, ADDRESS
FROM EMPLOYEE, DEPARTMENT
WHERE DNAME = 'Research' AND DNUMBER = DNO
• Query 2: For every project located in 'Stafford' , list the project number, the controlling department number, and the department manager's last name, address and birth date.
Q2: SELECT PNUMBER, DNUM, LNAME, BDATE, ADDRESS
FROM PROJECT, DEPARTMENT, EMPLOYEE
WHERE DNUM = DNUMBER AND MGRSSN = SSN AND PLOCATION = 'Stafford'
Q2x: SELECT PNUMBER, DNUM, LNAME, BDATE, ADDRESS
FROM DEPARTMENT JOIN PROJECT ON (DNUM = DNUMBER) JOIN EMPLOYEE ON (MGRSSN = SSN)
WHERE PLOCATION = 'Stafford'
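Queries like Q1 can be run verbatim on an in-memory SQLite database. The schema below is a cut-down COMPANY schema with only the columns the query touches, and the two sample rows are illustrative:

```python
# Hedged sketch: running the Q1-style SELECT-FROM-WHERE block on SQLite
# with a minimal, illustrative subset of the COMPANY schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DEPARTMENT (DNAME TEXT, DNUMBER INT PRIMARY KEY);
CREATE TABLE EMPLOYEE (FNAME TEXT, LNAME TEXT, ADDRESS TEXT, DNO INT);
INSERT INTO DEPARTMENT VALUES ('Research', 5), ('Administration', 4);
INSERT INTO EMPLOYEE VALUES
  ('John', 'Smith', '731 Fondren, Houston, TX', 5),
  ('Jennifer', 'Wallace', '291 Berry, Bellaire, TX', 4);
""")

# Q1: name and address of all employees who work for 'Research'
rows = con.execute("""
    SELECT FNAME, LNAME, ADDRESS
    FROM EMPLOYEE, DEPARTMENT
    WHERE DNAME = 'Research' AND DNUMBER = DNO
""").fetchall()
```

The join condition DNUMBER = DNO plays the role of the relational-algebra EQUIJOIN condition; without it the FROM-clause would produce a Cartesian product.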
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Aliases
• ALIASES: some queries need to refer to the same relation twice. In this case, aliases are given to the relation name.
Query 8: For each employee, retrieve the employee’s name and the name of his/her immediate supervisor.
Q8: SELECT E.FNAME, E.LNAME, S.FNAME, S.LNAME
FROM EMPLOYEE E, EMPLOYEE S
WHERE E.SUPERSSN = S.SSN
• Renaming attributes
Q8A: SELECT E.FNAME AS "Employee First Name", E.LNAME AS "Employee Last Name", S.FNAME AS "Supervisor First Name", S.LNAME AS "Supervisor Last Name"
FROM EMPLOYEE E, EMPLOYEE S
WHERE E.SUPERSSN = S.SSN
Q8B (SQL Server): SELECT E.LNAME + ', ' + E.FNAME as "Employee Name", S.LNAME + ', ' + S.FNAME as "Supervisor Name"
FROM EMPLOYEE E, EMPLOYEE S WHERE E.SUPERSSN = S.SSN
• In Q8, the alternate relation names E and S are called aliases for the EMPLOYEE relation
• We can think of E and S as two different copies of the EMPLOYEE relation; E represents employees in the role of supervisees and S represents employees in the role of supervisors
• Aliasing can also be used in any SQL query for convenience
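The self-join in Q8 runs unchanged on SQLite; the three sample employees below form a small, invented supervision chain:

```python
# Sketch of Q8: the same EMPLOYEE relation joined with itself via aliases
# E and S. Sample data is an illustrative three-person supervision chain.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMPLOYEE (FNAME TEXT, LNAME TEXT, SSN TEXT, SUPERSSN TEXT)")
con.executemany("INSERT INTO EMPLOYEE VALUES (?,?,?,?)", [
    ("James",    "Borg",  "888665555", None),         # no supervisor
    ("Franklin", "Wong",  "333445555", "888665555"),  # supervised by Borg
    ("John",     "Smith", "123456789", "333445555"),  # supervised by Wong
])

rows = con.execute("""
    SELECT E.FNAME, E.LNAME, S.FNAME, S.LNAME
    FROM EMPLOYEE E, EMPLOYEE S
    WHERE E.SUPERSSN = S.SSN
""").fetchall()
```

Borg does not appear as a supervisee because his SUPERSSN is NULL, and NULL never satisfies the join condition; this is the behavior noted later under NULLs in SQL queries.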
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Unspecified WHERE-clause
• Unspecified WHERE-clause:– A missing WHERE-clause indicates no condition; hence, all tuples of the
relations in the FROM-clause are selected. This is equivalent to the condition WHERE TRUE
Query 9: Retrieve the ssn values of all employees.
Q9: SELECT SSN
FROM EMPLOYEE
– If more than one relation is specified in the FROM-clause and there is no join condition, then the CARTESIAN PRODUCT of tuples is selected
Q10: SELECT SSN, DNAME
FROM EMPLOYEE, DEPARTMENT
– It is extremely important not to overlook specifying any selection and join conditions in the WHERE-clause; otherwise, incorrect and very large relations may result
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: DISTINCT and *
• Use of *:To retrieve all the attribute values of the selected tuples, a * is used, which stands for all the
attributes.
Q1C: SELECT *
FROM EMPLOYEE
WHERE DNO = 5
Q1D: SELECT *
FROM EMPLOYEE, DEPARTMENT
WHERE DNAME = 'Research' AND DNO = DNUMBER
• Tables as Sets
SQL does not treat a relation as a set; duplicate tuples can appear. To eliminate duplicate tuples, the keyword DISTINCT is used.
Q11: SELECT SALARY
FROM EMPLOYEE
Q11A: SELECT DISTINCT SALARY
FROM EMPLOYEE
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Set Operations
• Set Operations:– SQL has directly incorporated some set operations. There is a union operation
(UNION), and in some versions of SQL there are set difference (MINUS) and intersection (INTERSECT) operations
– The resulting relations of these set operations are sets of tuples; duplicate tuples are eliminated from the result
– The set operations apply only to union compatible relations; the two relations must have the same attributes and the attributes must appear in the same order
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Set Operations
• Set Operations:– Example
Query 4: Make a list of all project numbers for projects that involve an employee whose last name is 'Smith' as a worker or as a manager of the department that controls the project
Q4: (SELECT DISTINCT PNUMBER
FROM PROJECT, DEPARTMENT, EMPLOYEE
WHERE DNUM = DNUMBER AND MGRSSN = SSN AND LNAME = 'Smith')
UNION
(SELECT PNUMBER
FROM PROJECT, WORKS_ON, EMPLOYEE
WHERE PNUMBER = PNO AND ESSN = SSN AND
LNAME = 'Smith')
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Substring Comparision
• Substring Comparison:– The LIKE comparison operator is used to compare partial strings– Two reserved characters are used : '%' ( or * in some implementations) replaces
an arbitrary number of characters, and '_' replaces a single arbitrary character– Query 12: Retrieve all employees whose address is in 'Houston, Texas'. Here, the
value of the ADDRESS attribute must contain the substring 'Houston,TX'
Q12: SELECT FNAME, LNAME
FROM EMPLOYEE
WHERE ADDRESS LIKE '%Houston,TX%'
– Query 12A: Retrieve all employees who were born during the 1950s. Here, '5' must be the 8th character of the string (according to our date format), so the BDATE value is '_______5_', with each underscore serving as a placeholder for a single arbitrary character.
Q12A: SELECT FNAME, LNAME
FROM EMPLOYEE
WHERE BDATE LIKE '_______5_'
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Arithmetic Operators
• Arithmetic Operators:– The standard arithmetic operators '+', '-', '*', and '/' (for addition, subtraction,
multiplication, and division, respectively) can be applied to numeric values in an SQL query result
Query 13: Show the effect of giving all employees who work on the 'ProductX' project a 10% raise.
Q13: SELECT FNAME, LNAME, 1.1*SALARY
FROM EMPLOYEE, WORKS_ON, PROJECT
WHERE SSN = ESSN AND PNO = PNUMBER AND PNAME = 'ProductX'
Query 14: Retrieve all employees in department 5 whose salary is between $30,000 and $40,000.
Q14: SELECT *
FROM EMPLOYEE
WHERE (SALARY BETWEEN 30000 AND 40000) AND DNO = 5;
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: ORDER BY
• ORDER BY:– The ORDER BY clause is used to sort the tuples in a query result based on the
values of some attributes.
– The default order is in ascending order of values
– We can specify the keyword DESC if we want a descending order; the keyword ASC can be used to explicitly specify ascending order, even though it is default
– ExampleQuery 15: Retrieve a list of employees and the project each works on , ordered by
employee's department and within each department ordered alphabetically by employee last name
Q15: SELECT DNAME, LNAME, FNAME, PNAME
FROM DEPARTMENT, EMPLOYEE, WORKS_ON, PROJECT
WHERE DNUMBER = DNO AND SSN = ESSN AND PNO = PNUMBER
ORDER BY DNAME, LNAME
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Nesting of Queries
• Nesting of Queries– A complete SELECT query, called a nested query , can be specified within the
WHERE-clause of another query, called the outer queryQuery 1: Retrieve the name and the address of all employees who work for the
'Research' department.
Q1a: SELECT FNAME, LNAME, ADDRESS
FROM EMPLOYEE
WHERE DNO IN ( SELECT DNUMBER
FROM DEPARTMENT
WHERE DNAME='Research')
– In general, we can have several levels of nested queries
– A reference to an unqualified attribute refers to the relation declared in the innermost nested query
– Only the outermost SELECT statement can have an ORDER BY clause
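Nested query Q1a runs as-is on SQLite; the minimal schema and the two sample rows below are illustrative:

```python
# Sketch of the nested Q1a: the inner SELECT yields the set of department
# numbers, and the outer WHERE uses IN against it. Schema/data are a
# cut-down illustration of the COMPANY database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DEPARTMENT (DNAME TEXT, DNUMBER INT);
CREATE TABLE EMPLOYEE (FNAME TEXT, LNAME TEXT, ADDRESS TEXT, DNO INT);
INSERT INTO DEPARTMENT VALUES ('Research', 5), ('Headquarters', 1);
INSERT INTO EMPLOYEE VALUES
  ('John', 'Smith', 'Houston, TX', 5),
  ('James', 'Borg', 'Houston, TX', 1);
""")

rows = con.execute("""
    SELECT FNAME, LNAME, ADDRESS
    FROM EMPLOYEE
    WHERE DNO IN (SELECT DNUMBER
                  FROM DEPARTMENT
                  WHERE DNAME = 'Research')
""").fetchall()
```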
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Correlated Nested Queries
• Correlated nested queries:– If a condition in the WHERE-clause of a nested query references an attribute of a relation
declared in the outer query, the two queries are said to be correlated
– The result of a correlated nested query is different for each tuple (or combination of tuples) of the relation(s) in the outer query
Query 16: Retrieve the name of each employee who has a dependent with the same first name as the employee.
Q16: SELECT DISTINCT E.FNAME, E.LNAME
FROM EMPLOYEE E
WHERE E.SSN IN ( SELECT ESSN
FROM DEPENDENT
WHERE ESSN = E.SSN AND E.FNAME = DEPENDENT_NAME)
– A query written with nested SELECT...FROM...WHERE... blocks and using the = or IN comparison operators can always be expressed as a single-block query. For example, Q16 may be written as
Q16A: SELECT DISTINCT E.FNAME, E.LNAME
FROM EMPLOYEE E, DEPENDENT D
WHERE E.SSN = D.ESSN AND E.FNAME = D.DEPENDENT_NAME
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: The Exists Function
• The Exists Function:– EXISTS used to check whether the result of a correlated nested query is empty (contains no
tuples ) Query 16: Retrieve the name of each employee who has a dependent with the same first name as the
employee
Q16B: SELECT FNAME, LNAME
FROM EMPLOYEE AS e
WHERE EXISTS ( SELECT *
FROM DEPENDENT
WHERE e.SSN = ESSN AND e.FNAME = DEPENDENT_NAME)
Query 6: Retrieve the names of employees who have no dependents
Q6: SELECT FNAME, LNAME
FROM EMPLOYEE
WHERE NOT EXISTS (SELECT *
FROM DEPENDENT
WHERE SSN = ESSN )
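Q6's correlated NOT EXISTS runs unchanged on SQLite; the unqualified SSN inside the subquery resolves to the outer EMPLOYEE row. Sample data is illustrative:

```python
# Sketch of Q6 (NOT EXISTS): employees with no dependents. The subquery is
# correlated: SSN resolves to the outer EMPLOYEE tuple, ESSN to DEPENDENT.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE EMPLOYEE (FNAME TEXT, LNAME TEXT, SSN TEXT);
CREATE TABLE DEPENDENT (ESSN TEXT, DEPENDENT_NAME TEXT);
INSERT INTO EMPLOYEE VALUES
  ('Franklin', 'Wong', '333445555'),
  ('James', 'Borg', '888665555');
INSERT INTO DEPENDENT VALUES ('333445555', 'Alice');
""")

rows = con.execute("""
    SELECT FNAME, LNAME
    FROM EMPLOYEE
    WHERE NOT EXISTS (SELECT *
                      FROM DEPENDENT
                      WHERE SSN = ESSN)
""").fetchall()
```

Wong is excluded because the subquery finds a matching dependent for him; Borg, with no dependents, survives the NOT EXISTS test.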
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Explicit Sets
• Explicit Sets
– It is also possible to use an explicit set of values in the WHERE-clause rather than a nested query.
Query 17: Retrieve the social security numbers of all employees who work on project number 1, 2, or 3.
Q17: SELECT DISTINCT ESSN
FROM WORKS_ON
WHERE PNO IN (1, 2, 3)
• NULLs in SQL Queries
– SQL allows queries that check whether a value is NULL (missing, undefined, or not applicable)
– SQL uses IS or IS NOT to compare NULLs because it considers each NULL value distinct from other NULL values, so equality comparison is not appropriate.
Query 18: Retrieve the names of all employees who do not have supervisors.
Q18: SELECT FNAME, LNAME
FROM EMPLOYEE
WHERE SUPERSSN IS NULL
Note: If a join condition is specified, tuples with NULL values for the join attributes are not included in the result
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Aggregate Functions
• Aggregate Functions: COUNT, SUM, MAX, MIN, and AVG
Query 19: Find the maximum salary, the minimum salary, and the average salary among all employees.
Q19: SELECT MAX(SALARY), MIN(SALARY), AVG(SALARY)
FROM EMPLOYEE
Query 20: Find the maximum salary, the minimum salary, and the average salary among employees who work for the 'Research' department.
Q20: SELECT MAX(SALARY), MIN(SALARY), AVG(SALARY)
FROM EMPLOYEE, DEPARTMENT
WHERE DNO = DNUMBER AND DNAME = 'Research'
Queries 21 and 22: Retrieve the total number of employees in the company (Q21), and the number of employees in the 'Research' department(Q22)
Q21: SELECT COUNT(*)
FROM EMPLOYEE
Q22: SELECT COUNT(*)
FROM EMPLOYEE, DEPARTMENT
WHERE DNO = DNUMBER AND DNAME = 'Research'
Query 5: Retrieve the names of all employees who have two or more dependents.
Q5: SELECT LNAME, FNAME
FROM EMPLOYEE
WHERE (SELECT COUNT(*)
FROM DEPENDENT
WHERE SSN = ESSN) >= 2;
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: Grouping
• Grouping:– In many cases, we want to apply the aggregate functions to subgroups of tuples in a relation– Each subgroup of tuples consists of the set of tuples that have the same value for grouping
attribute(s)– The function is applied to each subgroup independently– SQL has a GROUP BY-clause for specifying the grouping attributes, which must also
appear in the SELECT-clause
Query 24: For each department, retrieve the department number, the number of employees in the department, and their average salary.
Q24: SELECT DNO, COUNT(*), AVG(SALARY)
FROM EMPLOYEE
GROUP BY DNO
– A join condition can be used in conjunction with groupingQuery 25: For each project, retrieve the project number, project name, and the number of
employees who work on that project.
Q25: SELECT PNUMBER, PNAME, COUNT(*)
FROM PROJECT, WORKS_ON
WHERE PNUMBER = PNO
GROUP BY PNUMBER, PNAME
Chapter 8 SQL - The Relational Database Standard
The SELECT Statement: The Having Clause
• The Having-clause:– Sometimes we want to retrieve the values of these functions for only those groups that
satisfy certain conditions– The HAVING-clause is used for specifying a selection condition on groups (rather than on
individual tuples)– Example
Query 26: For each project on which more than two employees work, retrieve the project number, project name, and the number of employees who work on that project
Q26: SELECT PNUMBER, PNAME, COUNT(*)
FROM PROJECT, WORKS_ON
WHERE PNUMBER = PNO
GROUP BY PNUMBER, PNAME
HAVING COUNT(*) > 2
Query 28: For each department that has more than five employees, retrieve the department number and the number of its employees who are making more than $40,000.
Q28: SELECT DNUMBER, COUNT(*)
FROM DEPARTMENT, EMPLOYEE
WHERE DNUMBER = DNO AND SALARY > 40000 AND
DNO IN (SELECT DNO
FROM EMPLOYEE
GROUP BY DNO
HAVING COUNT(*) > 5)
GROUP BY DNUMBER;
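Q26's GROUP BY + HAVING pattern runs directly on SQLite. The two toy projects below give one group that passes the HAVING filter and one that does not:

```python
# Sketch of Q26: GROUP BY forms one group per (PNUMBER, PNAME); HAVING then
# keeps only groups with more than two workers. Data is illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE PROJECT (PNAME TEXT, PNUMBER INT);
CREATE TABLE WORKS_ON (ESSN TEXT, PNO INT);
INSERT INTO PROJECT VALUES ('ProductX', 1), ('ProductY', 2);
INSERT INTO WORKS_ON VALUES ('e1', 1), ('e2', 1), ('e3', 1), ('e4', 2);
""")

rows = con.execute("""
    SELECT PNUMBER, PNAME, COUNT(*)
    FROM PROJECT, WORKS_ON
    WHERE PNUMBER = PNO
    GROUP BY PNUMBER, PNAME
    HAVING COUNT(*) > 2
""").fetchall()
```

ProductY (one worker) is filtered out by HAVING, while ProductX (three workers) remains, which mirrors the WHERE-on-tuples versus HAVING-on-groups distinction the slide draws.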
Chapter 8 SQL - The Relational Database Standard
Summary of SQL Queries
• A query in SQL can consist of up to six clauses, but only the first two, SELECT and FROM , are mandatory. The clauses are specified in the following order:
SELECT <attribute list>
FROM <table list>
[WHERE <condition>]
[GROUP BY <grouping attribute(s)>]
[HAVING <group condition>]
[ORDER BY <attribute list>]
• The SELECT-clause lists the attributes or functions to be retrieved • The FROM-clause specifies all relations(or aliases) needed in the query but not those needed in
the nested queries• The WHERE-clause specifies the conditions for selection and join of tuples from the relations
specified in the FROM-clause• GROUP BY specifies grouping attributes• HAVING specifies a condition for selection of groups• ORDER BY specifies an order for displaying the result of a query• A query is evaluated by first applying the WHERE-clause, then GROUP BY and HAVING, and
finally the SELECT-clause
Chapter 8 SQL - The Relational Database Standard
Insert Statement
• In its simplest form, the INSERT statement adds a single tuple to a relation. Attribute values must be listed in the same order as the attributes were specified in the CREATE TABLE command.
U1: INSERT INTO EMPLOYEE
VALUES ('Richard', 'K', 'Marini', '653298653', '1962-12-30',
'98 Oak Forest, Katy, TX', 'M', 37000, '987654321', 4)
• An alternate form of INSERT specifies explicitly the attribute names that correspond to the values in the new tuple. Attributes with NULL values can be left out
Insert a tuple for a new EMPLOYEE for whom we only have values for FNAME, LNAME, and the SSN attributes
U1A: INSERT INTO EMPLOYEE (FNAME, LNAME, SSN)
VALUES ('Richard' , 'Marini', '653298653')
Chapter 8 SQL - The Relational Database Standard
Insert Statement
• Another variation of INSERT allows insertion of multiple tuples in a relation in a single command
Example: Suppose we want to create a temporary table that has the name, number of employees and total salaries for each department. A table DEPTS_INFO is created by U3A , and is loaded with the summary information retrieved from the database by the query in U3B
U3A: CREATE TABLE DEPTS_INFO
(DEPT_NAME VARCHAR(10),
NO_OF_EMPS INTEGER,
TOTAL_SAL INTEGER);
U3B: INSERT INTO DEPTS_INFO (DEPT_NAME, NO_OF_EMPS, TOTAL_SAL)
SELECT DNAME, COUNT (*), SUM (SALARY)
FROM DEPARTMENT , EMPLOYEE
WHERE DNUMBER = DNO
GROUP BY DNAME;
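The U3A/U3B pattern (create a summary table, then fill it with INSERT ... SELECT) runs on SQLite; the one-department sample data below is illustrative:

```python
# Sketch of U3A/U3B: populate DEPTS_INFO from a grouped query in one
# INSERT ... SELECT statement. Schema and data are a minimal illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DEPARTMENT (DNAME TEXT, DNUMBER INT);
CREATE TABLE EMPLOYEE (SALARY INT, DNO INT);
INSERT INTO DEPARTMENT VALUES ('Research', 5);
INSERT INTO EMPLOYEE VALUES (30000, 5), (40000, 5);

-- U3A: the summary table
CREATE TABLE DEPTS_INFO
  (DEPT_NAME VARCHAR(10), NO_OF_EMPS INTEGER, TOTAL_SAL INTEGER);

-- U3B: load it from a grouped query
INSERT INTO DEPTS_INFO (DEPT_NAME, NO_OF_EMPS, TOTAL_SAL)
  SELECT DNAME, COUNT(*), SUM(SALARY)
  FROM DEPARTMENT, EMPLOYEE
  WHERE DNUMBER = DNO
  GROUP BY DNAME;
""")

rows = con.execute("SELECT * FROM DEPTS_INFO").fetchall()
```

Note that DEPTS_INFO holds a snapshot: unlike a view, it does not change when EMPLOYEE changes afterward.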
Chapter 8 SQL - The Relational Database Standard
DELETE Statement
• Removes tuples from a relation
• Includes a WHERE-clause to select the tuples to be deleted
Examples:
U4A: DELETE FROM EMPLOYEE
WHERE LNAME='Brown'
U4B: DELETE FROM EMPLOYEE
WHERE SSN='123456789'
U4C : DELETE FROM EMPLOYEE
WHERE DNO IN ( SELECT DNUMBER
FROM DEPARTMENT
WHERE DNAME='Research' )
U4D : DELETE FROM EMPLOYEE
Chapter 8 SQL - The Relational Database Standard
UPDATE Statement
• Used to modify attribute values of one or more selected tuples
• A WHERE-clause selects the tuples to be modified
• An additional SET-clause specifies the attributes to be modified and their new values
• Example: Change the location and controlling department number of project number 10 to 'Bellaire' and 5, respectively.
U5: UPDATE PROJECT
SET PLOCATION = 'Bellaire', DNUM = 5
WHERE PNUMBER = 10
• Example: Give all employees in the 'Research' department a 10% raise in salary.
U6: UPDATE EMPLOYEE
SET SALARY = SALARY * 1.1
WHERE DNO IN (SELECT DNUMBER
FROM DEPARTMENT
WHERE DNAME = 'Research')
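U6 runs unchanged on SQLite; the two sample employees below (one in 'Research', one not) show that only the selected tuples are modified:

```python
# Sketch of U6: a 10% raise for 'Research' employees via UPDATE with a
# nested subquery in the WHERE-clause. Schema and rows are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DEPARTMENT (DNAME TEXT, DNUMBER INT);
CREATE TABLE EMPLOYEE (LNAME TEXT, SALARY REAL, DNO INT);
INSERT INTO DEPARTMENT VALUES ('Research', 5);
INSERT INTO EMPLOYEE VALUES ('Smith', 30000, 5), ('Wallace', 43000, 4);
""")

con.execute("""
    UPDATE EMPLOYEE
    SET SALARY = SALARY * 1.1
    WHERE DNO IN (SELECT DNUMBER
                  FROM DEPARTMENT
                  WHERE DNAME = 'Research')
""")
salaries = dict(con.execute("SELECT LNAME, SALARY FROM EMPLOYEE").fetchall())
```

Smith (department 5) gets the raise; Wallace (department 4) is untouched.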
Chapter 8 SQL - The Relational Database Standard
Views in SQL
• A view is a single virtual table that is derived from other base tables or views.
• A view does not necessarily exist in physical form, which limits the possible update operations that can be applied to views.
• The CREATE VIEW command specifies a view by giving it a (virtual) table name and a defining query.
• The view attribute names can be inherited from the tables in the defining query.
• A view is not materialized at the time of view definition, but rather at the time we specify a query on the view.
• Examples:
V1: CREATE VIEW WORKS_ON1 AS
SELECT FNAME, LNAME, PNAME, HOURS
FROM EMPLOYEE, PROJECT, WORKS_ON
WHERE SSN = ESSN AND PNO = PNUMBER;
V2: CREATE VIEW DEPT_INFO (DEPT_NAME, NO_OF_EMPS, TOTAL_SAL) AS
SELECT DNAME, COUNT(*), SUM(SALARY)
FROM DEPARTMENT, EMPLOYEE
WHERE DNUMBER = DNO
GROUP BY DNAME;
V3: CREATE VIEW EMP_V AS
SELECT FNAME, MINIT, LNAME, SSN, BDATE, ADDRESS, SEX, SUPERSSN, DNO
FROM EMPLOYEE
Views in SQL
• A view is removed using the DROP VIEW command.
• Examples:
V1A: DROP VIEW WORKS_ON1;
V2A: DROP VIEW DEPT_INFO;
• Views can also be used as a security and authorization mechanism (see Chapter 20).
• Updating the Views:
– A view update operation may be mapped in multiple ways to update operations on the defining base relations.
– The topic of updating views is still an active research area.
– A view update is unambiguous only if one update on the base relations can accomplish the desired update effect on the view.
– If a view update can be mapped to more than one update on the underlying base relations, we must have a procedure to choose the desired update.
– General observations:
• A view with a single defining table is updatable if the view attributes contain the primary key.
• Views defined on multiple tables using joins are generally not updatable.
• Views defined using aggregate functions are not updatable.
Chapter 9 ER- and EER-to-Relational Mapping, and Other Relational Languages
The Relational Calculus
• A formal language based on first-order predicate calculus.
• Many commercial relational languages are based on some aspects of relational calculus, including SQL.
• QBE (Chapter 9) is closer to relational calculus than SQL.
• Differences from Relational Algebra:
– One declarative calculus expression specifies a retrieval query.
– A sequence of operations is used in relational algebra.
– Relational algebra is more procedural.
– Relational calculus is more declarative (less procedural).
– The expressive power of the two languages is identical.
• Relational Completeness:
– A relational query language L is relationally complete if we can express in L any query that can be expressed in the relational calculus (or algebra).
– Most relational query languages are relationally complete.
– More expressive power is provided by operations such as aggregate functions, grouping, and ordering.
Chapter 14 Functional Dependencies and Normalization for Relational Databases
Functional Dependencies and Normalization for Relational Database
• Informal Design Guidelines for Relational Databases
– Semantics of the Relation Attributes
– Redundant Information in Tuples and Update Anomalies
– Null Values in Tuples
– Spurious Tuples
• Functional Dependencies (FDs)
– Definition of FD
– Inference Rules for FDs
– Equivalence of Sets of FDs
– Minimal Sets of FDs
• Normal Forms Based on Primary Keys
– Introduction to Normalization
– First Normal Form
– Second Normal Form
– Third Normal Form
• General Normal Form Definitions (for Multiple Keys)
• BCNF (Boyce-Codd Normal Form)
Informal Design Guidelines for Relational Databases
• Guideline 1 (Semantics of the Relation Attributes)
Design a relation schema so that it is easy to explain its meaning. Do not combine attributes from multiple entity types and relationship types into a single relation. Intuitively, if a relation schema corresponds to one entity type or one relationship type, its meaning tends to be clear. Otherwise, the relation corresponds to a mixture of multiple entities and relationships and hence becomes semantically unclear.
• Bad design example:
EMP_DEPT(FNAME, MINIT, LNAME, SSN, BDATE, ADDRESS, DNO, DNAME, MGRSSN)
Informal Design Guidelines for Relational Databases
• Guideline 2 (Redundant Information in Tuples and Update Anomalies)
Design the base relation schemas so that no insertion, deletion, or modification anomalies are present in the relations. If any anomalies are present, note them clearly and make sure that the programs that update the database will operate correctly.
– Insertion anomalies
• To insert a new employee, we must provide the department attribute values consistently, so that they agree among all employees who work for the same department.
• It is difficult to insert a new department that has no employees as yet.
– Deletion anomalies
• When the last employee of a department is deleted, the information about the department is lost.
– Modification anomalies
• To change the value of a department attribute, we must change it for all employees who work for that department.
Informal Design Guidelines for Relational Databases
• Guideline 3 (Null Values in Tuples)
As far as possible, avoid placing attributes in a base relation whose values may frequently be null. If nulls are unavoidable, make sure that they apply in exceptional cases only and not to a majority of tuples in the relation.
– Null has several interpretations:
• The attribute does not apply to this tuple.
• The attribute value for this tuple is unknown.
• The value is known but absent; it has not been recorded yet.
– Attributes that are frequently NULL can be placed in separate relations (together with the primary key).
Informal Design Guidelines for Relational Databases
• Guideline 4 (Spurious Tuples)
Design relation schemas so that they can be joined with equality conditions on attributes that are either primary keys or foreign keys, in a way that guarantees that no spurious tuples are generated. Do not have relations that contain matching attributes other than foreign-key/primary-key combinations. If such relations are unavoidable, do not join them on such attributes, because the join may produce spurious tuples.
– Bad designs for a relational database may produce erroneous results for certain JOIN operations.
– The "lossless join" property is used to guarantee meaningful results for join operations.
Informal Design Guidelines for Relational Databases
• Guideline 4 (Spurious Tuples) – Example
EMP_DEPT(FNAME, MINIT, LNAME, SSN, BDATE, ADDRESS, DNO, DNAME, MGRSSN):
John     B  Smith  1111  xxx  xxx  1  Research  xxx
Franklin T  Wang   2222  xxx  xxx  1  Research  xxx
WORKS_ON(ESSN, PNO, PNAME, DNO, HOURS):
1111  1  xxx  1  20
2222  2  xxx  1  30
Joining the two relations on the matching attribute DNO (which is not a foreign-key/primary-key combination) generates spurious tuples:
FNAME    MINIT  LNAME  SSN   DNO  DNAME     SSN   PNO  DNO  HOURS
John     B      Smith  1111  1    Research  1111  1    1    20
John     B      Smith  1111  1    Research  2222  2    1    30
Franklin T      Wang   2222  1    Research  1111  1    1    20
Franklin T      Wang   2222  1    Research  2222  2    1    30
Functional Dependencies
• Definition of FD
– A set of attributes X functionally determines a set of attributes Y if the value of X determines a unique value for Y.
• Written as X → Y; it can also be displayed graphically on a relational schema.
• For any two tuples t1 and t2 in any relation instance r(R): if t1[X] = t2[X], then t1[Y] = t2[Y].
– An FD is a property of the attributes in the schema R.
– If K is a key of R, then K functionally determines all attributes in R (since we never have two distinct tuples with t1[K] = t2[K]).
– Examples of FD constraints:
• Social security number determines employee name: SSN → ENAME
• Project number determines project name and location: PNUMBER → {PNAME, PLOCATION}
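The tuple-based definition is easy to check mechanically. Below is a minimal Python sketch; the relation-as-list-of-dicts encoding and the sample EMPLOYEE rows are illustrative assumptions, not part of the text.

```python
# Sketch: test whether an FD X -> Y holds in one relation instance.
# Note: an FD is a property of the schema; holding in a single
# instance does not prove the FD, but a violation disproves it.

def holds(rows, X, Y):
    """True iff no two tuples agree on X but disagree on Y."""
    seen = {}  # X-projection -> Y-projection
    for t in rows:
        x = tuple(t[a] for a in X)
        y = tuple(t[a] for a in Y)
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

# Hypothetical EMPLOYEE instance.
employees = [
    {"SSN": "1111", "ENAME": "Smith", "DNO": 1},
    {"SSN": "2222", "ENAME": "Wang",  "DNO": 1},
]

print(holds(employees, ["SSN"], ["ENAME"]))  # True: SSN -> ENAME holds here
print(holds(employees, ["DNO"], ["ENAME"]))  # False: same DNO, different ENAME
```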
Functional Dependencies
• Inference Rules for FDs
– Given a set F of FDs, we can infer additional FDs that hold whenever the FDs in F hold.
– Armstrong's inference rules:
• IR1 (Reflexive): If X ⊇ Y, then X → Y
• IR2 (Augmentation): If X → Y, then XZ → YZ
• IR3 (Transitive): If X → Y and Y → Z, then X → Z
• IR1, IR2, and IR3 form a sound and complete set of inference rules.
– Some additional useful inference rules:
• (Decomposition) If X → YZ, then X → Y and X → Z
• (Union) If X → Y and X → Z, then X → YZ
• (Pseudotransitivity) If X → Y and WY → Z, then WX → Z
• These three rules, like any other valid inference rule, can be deduced from IR1, IR2, and IR3 (completeness property).
Functional Dependencies
• Inference Rules for FDs (continued)
– The closure of a set F of FDs, denoted F+, is the set of all FDs that can be inferred from F.
– Armstrong's inference rules are sound and complete:
• Any FD inferred from F using IR1, IR2, and IR3 holds in every relation instance that satisfies F (soundness).
• F+ can be calculated by repeatedly applying IR1, IR2, and IR3 to the FDs in F (completeness).
– The closure of a set of attributes X with respect to F, denoted X+, is the set of all attributes that are functionally determined by X.
– Algorithm 14.1 calculates X+ with respect to F.
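The attribute-closure idea is short enough to sketch directly in Python; the (lhs, rhs) set-pair encoding of FDs and the sample FDs are my own conventions, not a transcription of Algorithm 14.1:

```python
# Sketch of attribute closure X+ under a set of FDs F:
# grow the result by any FD whose left-hand side is already covered.

def closure(X, F):
    """X: set of attributes; F: list of (lhs, rhs) attribute-set pairs."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in F:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Illustrative FDs: SSN -> ENAME, SSN -> DNO, DNO -> DNAME
F = [({"SSN"}, {"ENAME"}), ({"SSN"}, {"DNO"}), ({"DNO"}, {"DNAME"})]
print(sorted(closure({"SSN"}, F)))  # ['DNAME', 'DNO', 'ENAME', 'SSN']
```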
Functional Dependencies
• Equivalence of Sets of FDs
– F covers G if every FD in G can be inferred from F (i.e., if G+ ⊆ F+).
– Two sets of FDs F and G are equivalent if:
• every FD in F can be inferred from G, and
• every FD in G can be inferred from F.
– Equivalently: F and G are equivalent if F covers G and G covers F.
– Equivalently: F and G are equivalent if F+ = G+.
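Equivalence can be tested without materializing F+ at all: F covers G exactly when, for every FD X → Y in G, Y is contained in the closure X+ computed under F. A sketch (the (lhs, rhs) set-pair encoding of FDs is my own convention):

```python
def closure(X, F):
    """Attribute closure of X under the FDs in F."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in F:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def covers(F, G):
    """F covers G: every FD (lhs, rhs) in G follows from F."""
    return all(set(rhs) <= closure(lhs, F) for lhs, rhs in G)

def equivalent(F, G):
    return covers(F, G) and covers(G, F)

F = [({"A"}, {"B"}), ({"B"}, {"C"})]
G = [({"A"}, {"B"}), ({"A"}, {"C"}), ({"B"}, {"C"})]
H = [({"A"}, {"B"})]
print(equivalent(F, G))  # True: A -> C follows from A -> B and B -> C
print(equivalent(F, H))  # False: H cannot derive B -> C
```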
Functional Dependencies
• Minimal Sets of FDs
– A set F of FDs is minimal if it satisfies the following conditions:
• Every dependency in F has a single attribute on its right-hand side.
• We cannot remove any dependency from F and still have a set of dependencies equivalent to F.
• We cannot replace any dependency X → A in F with a dependency Y → A, where Y ⊂ X, and still have a set of dependencies equivalent to F.
– Every set of FDs has an equivalent minimal set (also called a minimal cover).
– There can be several equivalent minimal sets.
– Algorithm 14.2 finds a minimal cover of a set of FDs F.
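The three conditions above translate into a three-step procedure. The sketch below is my own rendering of the standard minimal-cover computation, not a transcription of Algorithm 14.2; FDs are encoded as (lhs, rhs) set pairs:

```python
def closure(X, F):
    """Attribute closure of X under the FDs in F."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in F:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def minimal_cover(F):
    # Step 1: single-attribute right-hand sides (Decomposition rule).
    G = list(dict.fromkeys((frozenset(l), frozenset([a])) for l, r in F for a in r))
    # Step 2: drop extraneous attributes from left-hand sides.
    changed = True
    while changed:
        changed = False
        for i, (l, r) in enumerate(G):
            for a in l:
                if len(l) > 1 and r <= closure(l - {a}, G):
                    G[i] = (l - {a}, r)
                    changed = True
                    break
    G = list(dict.fromkeys(G))
    # Step 3: drop FDs that already follow from the remaining ones.
    for fd in list(G):
        rest = [g for g in G if g != fd]
        if fd[1] <= closure(fd[0], rest):
            G = rest
    return G

F = [({"A"}, {"B", "C"}), ({"B"}, {"C"}), ({"A"}, {"B"}), ({"A", "B"}, {"C"})]
cover = minimal_cover(F)
print(sorted((sorted(l), sorted(r)) for l, r in cover))
# [(['A'], ['B']), (['B'], ['C'])]
```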
Normal Forms Based on Primary Keys
• Introduction to Normalization
– Normalization: the process of decomposing unsatisfactory, "bad" relations by breaking up their attributes into smaller relations.
– Normal form: a condition using the keys and FDs of a relation to certify whether a relation schema is in a particular normal form.
– 2NF, 3NF, and BCNF are based on the keys and FDs of a relation schema.
– 4NF is based on keys and multivalued dependencies (MVDs); 5NF is based on keys and join dependencies (JDs).
– Additional properties may be needed to ensure a good relational design.
– Prime attribute: a member of some candidate key.
Normal Forms Based on Primary Keys
• First Normal Form
– Disallows composite attributes, multivalued attributes, and nested relations; that is, attributes whose value for an individual tuple is non-atomic.
– Considered to be part of the definition of a relation.
– Example: DEPARTMENT(DNAME, DNUMBER, MGRSSN, MGRSTARTDATE, DLOCATIONS), where DLOCATIONS is multivalued. To achieve 1NF, DLOCATIONS is placed in a separate relation together with the key DNUMBER.
Normal Forms Based on Primary Keys
• Second Normal Form
– Prime attribute: an attribute that is a member of the primary key K.
– Full functional dependency: a FD Y → Z where removal of any attribute from Y means the FD no longer holds.
Example: {SSN, PNUMBER} → HOURS is a full FD, since neither SSN → HOURS nor PNUMBER → HOURS holds.
– A relation schema R is in second normal form (2NF) if every non-prime attribute A in R is fully functionally dependent on the primary key.
– R can be decomposed into 2NF relations via the process of 2NF normalization.
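A 2NF violation (a partial dependency) can be detected mechanically: some proper subset of the primary key already determines a non-prime attribute. A sketch, assuming the FDs and the primary key are given explicitly; the encoding is my convention and the EMP_PROJ-style schema follows the chapter's example:

```python
from itertools import combinations

def closure(X, F):
    """Attribute closure of X under the FDs in F."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in F:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def is_2nf(attrs, key, F):
    """True iff no non-prime attribute depends on a proper subset of the key.
    Simplification: 'prime' here means 'member of the given primary key'."""
    nonprime = set(attrs) - set(key)
    for r in range(1, len(key)):
        for subset in combinations(sorted(key), r):
            if closure(set(subset), F) & nonprime:
                return False  # partial dependency found
    return True

# EMP_PROJ(ESSN, PNO, HOURS, ENAME, PNAME) with key {ESSN, PNO}:
F = [({"ESSN", "PNO"}, {"HOURS"}), ({"ESSN"}, {"ENAME"}), ({"PNO"}, {"PNAME"})]
print(is_2nf({"ESSN", "PNO", "HOURS", "ENAME", "PNAME"}, {"ESSN", "PNO"}, F))  # False
# WORKS_ON(ESSN, PNO, HOURS) keeps only the full dependency:
print(is_2nf({"ESSN", "PNO", "HOURS"}, {"ESSN", "PNO"}, [F[0]]))               # True
```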
Normal Forms Based on Primary Keys
• Second Normal Form (example)
EMP_PROJ(ESSN, PNO, HOURS, ENAME, PNAME, PLOCATION) is decomposed into:
WORKS_ON(ESSN, PNO, HOURS)
EMP(SSN, ENAME)
PROJECT(PNO, PNAME, PLOCATION)
Normal Forms Based on Primary Keys
• Third Normal Form
– Transitive functional dependency: a FD Y → Z that can be derived from two FDs Y → X and X → Z.
– A relation schema R is in third normal form (3NF) if it is in 2NF and no non-prime attribute A in R is transitively dependent on the primary key.
– R can be decomposed into 3NF relations via the process of 3NF normalization.
– Example: EMP_DEPT(ENAME, SSN, BDATE, ADDRESS, DNUM, DNAME, DMGRSSN) is decomposed into:
EMP(ENAME, SSN, BDATE, ADDRESS, DNUM)
DEPT(DNUM, DNAME, DMGRSSN)
General Normal Form Definitions
• General Normal Form Definitions (for Multiple Keys)
– A relation schema R is in second normal form (2NF) if every non-prime attribute A in R is fully functionally dependent on every key of R.
– Superkey of relation schema R: a set of attributes S of R that contains a key of R.
– A relation schema R is in third normal form (3NF) if, whenever a FD X → A holds in R, either:
• (a) X is a superkey of R, or
• (b) A is a prime attribute of R.
BCNF (Boyce-Codd Normal Form)
• A relation schema R is in Boyce-Codd Normal Form (BCNF) if, whenever a nontrivial FD X → A holds in R, X is a superkey of R.
• Each normal form is strictly stronger than the previous one:
– Every 2NF relation is in 1NF.
– Every 3NF relation is in 2NF.
– Every BCNF relation is in 3NF.
– There exist relations that are in 3NF but not in BCNF.
• The goal is to have each relation in BCNF (or at least 3NF).
• Additional criteria may be needed to ensure that the set of relations in a relational database is satisfactory (see Chapter 15):
– Lossless join property
– Dependency preservation property
• Additional normal forms are discussed later:
– 4NF (based on multi-valued dependencies)
– 5NF (based on join dependencies)
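Given the FDs and (for 3NF) the candidate keys, the general definitions translate directly into checks. A sketch: checking only the listed FDs rather than all of F+ is a simplification that is adequate when F is a cover, and candidate keys are passed in because finding them is a separate problem. The CITY/STREET/ZIP schema is the classic 3NF-but-not-BCNF example:

```python
def closure(X, F):
    """Attribute closure of X under the FDs in F."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in F:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def is_superkey(X, attrs, F):
    return closure(set(X), F) >= set(attrs)

def is_bcnf(attrs, F):
    # every nontrivial FD must have a superkey as its left-hand side
    return all(set(r) <= set(l) or is_superkey(l, attrs, F) for l, r in F)

def is_3nf(attrs, F, keys):
    prime = set().union(*keys)  # members of some candidate key
    for l, r in F:
        if set(r) <= set(l) or is_superkey(l, attrs, F):
            continue
        if not set(r) - set(l) <= prime:
            return False
    return True

attrs = {"CITY", "STREET", "ZIP"}
F = [({"CITY", "STREET"}, {"ZIP"}), ({"ZIP"}, {"CITY"})]
keys = [{"CITY", "STREET"}, {"STREET", "ZIP"}]
print(is_3nf(attrs, F, keys))  # True  (CITY is prime)
print(is_bcnf(attrs, F))       # False (ZIP is not a superkey)
```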
Chapter 15 Relational Database Design Algorithms and Further Dependencies
Relational Database Design Algorithms
• Normal forms are not a sufficient criterion for a good design.
– Example: any relation with two attributes is always in BCNF, so we could create 2-attribute relations arbitrarily and still have every relation in BCNF.
• Additional conditions are needed to ensure a good design
• Relational Decomposition
• The Dependency Preservation Property
• The Lossless Join Property
• Null Values and Dangling Tuples
Relational Decomposition
• We start with a universal relation schema R containing all the database attributes: R = {A1, A2, ..., An}.
• The design goal is a decomposition D of R into m relation schemas D = {R1, R2, ..., Rm}, where:
– Each relation schema Ri contains a subset of the attributes of R.
– Every attribute of R appears in at least one Ri.
The Dependency Preservation Property
• The database designers define a set F of functional dependencies that should hold on the attributes of R.
• The decomposition should preserve the dependencies; informally, the collection of all dependencies that hold on the individual relations Ri should be equivalent to F. Formally:
– Define the projection of F on Ri, denoted πRi(F), to be the set of FDs X → Y in F+ such that X ∪ Y ⊆ Ri.
– A decomposition D = {R1, R2, ..., Rm} is dependency preserving if (πR1(F) ∪ ... ∪ πRm(F))+ = F+.
– This property makes it possible to ensure that the FDs in F hold simply by ensuring that the dependencies on each relation Ri hold individually.
• Relational synthesis algorithm: decompose R into a dependency-preserving decomposition D = {R1, R2, ..., Rm} with respect to F such that each Ri is in 3NF:
– Find a minimal set of FDs G equivalent to F.
– For each left-hand side X of an FD X → A in G, create a relation schema Ri in D with the attributes {X, A1, A2, ..., Ak}, where A1, ..., Ak are all the attributes appearing in FDs of G with X as the left-hand side.
– If any attributes of R are not placed in any Ri, create another relation in D for these attributes.
• Problems:
– The algorithm requires a minimal cover G of F.
– Several minimal covers can exist for F, so the result of the algorithm can differ depending on which cover is chosen.
The Lossless (Non-Additive) Join Property
• Informally, this property ensures that no spurious tuples appear when the relations of the decomposition are joined. Formally:
– A decomposition D = {R1, R2, ..., Rm} of R has the lossless join property with respect to a set F of FDs if, for every relation instance r(R) whose tuples satisfy all the FDs in F, we have (where * denotes the natural join): πR1(r(R)) * πR2(r(R)) * ... * πRm(r(R)) = r(R).
– This condition ensures that whenever a relation instance r(R) satisfies F, no spurious tuples are generated by joining the decomposed relations r(Ri).
– Since we actually store the decomposed relations as base relations, this condition is necessary to generate meaningful results for queries involving joins.
The Lossless (Non-Additive) Join Property
• Algorithm 15.2: testing whether a decomposition D satisfies the lossless join property with respect to a set F of FDs.
1. Create an initial matrix S with one row i for each relation Ri in D and one column j for each attribute Aj in R.
2. Set S(i, j) = bij for all matrix entries (each bij is a distinct symbol associated with indices (i, j)).
3. For each row i representing relation schema Ri, and for each column j representing attribute Aj: if Ri includes attribute Aj, set S(i, j) = aj.
4. Repeat the following loop until a complete pass produces no changes to S: for each FD X → Y in F, for all rows in S that have the same symbols in the columns corresponding to the attributes in X, make the symbols in each column corresponding to an attribute in Y the same in all these rows, as follows: if any of the rows has an "a" symbol for the column, set the other rows to that same "a" symbol; if no "a" symbol exists for the attribute in any of the rows, choose one of the "b" symbols appearing in the rows and set the other rows to that same "b" symbol.
5. If some row consists entirely of "a" symbols, the decomposition has the lossless join property; otherwise it does not.
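The matrix test can be sketched compactly; "a" and "b" symbols are encoded as tuples so they stay distinct. The attribute-list/FD encoding is my convention, and the EMP/PROJECT-style decomposition is an illustrative example:

```python
def lossless_join(attrs, D, F):
    """attrs: ordered list of R's attributes; D: list of attribute sets;
    F: list of (lhs, rhs) attribute-set pairs."""
    col = {a: j for j, a in enumerate(attrs)}
    # Steps 1-3: 'a_j' where Ri contains Aj, otherwise a distinct 'b_i_j'.
    S = [[("a", j) if a in Ri else ("b", i, j) for j, a in enumerate(attrs)]
         for i, Ri in enumerate(D)]
    changed = True
    while changed:                       # step 4: repeat until no change
        changed = False
        for X, Y in F:
            groups = {}
            for row in S:
                key = tuple(row[col[a]] for a in sorted(X))
                groups.setdefault(key, []).append(row)
            for rows in groups.values():
                for a in Y:
                    j = col[a]
                    symbols = {row[j] for row in rows}
                    if len(symbols) > 1:
                        # prefer an 'a' symbol, else pick one 'b' symbol
                        target = next((s for s in symbols if s[0] == "a"),
                                      min(symbols))
                        for row in rows:
                            row[j] = target
                        changed = True
    # Step 5: lossless iff some row is all 'a' symbols.
    return any(all(cell[0] == "a" for cell in row) for row in S)

attrs = ["SSN", "ENAME", "PNUMBER", "PNAME", "HOURS"]
F = [({"SSN"}, {"ENAME"}), ({"PNUMBER"}, {"PNAME"}),
     ({"SSN", "PNUMBER"}, {"HOURS"})]
good = [{"SSN", "ENAME"}, {"PNUMBER", "PNAME"}, {"SSN", "PNUMBER", "HOURS"}]
bad = [{"SSN", "ENAME"}, {"PNUMBER", "PNAME"}]
print(lossless_join(attrs, good, F))  # True
print(lossless_join(attrs, bad, F))   # False
```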
The Lossless (Non-Additive) Join Property
• Algorithm 15.3: decomposing R into BCNF relations such that the decomposition has the lossless join property with respect to a set F of FDs on R:
1. Set D := {R}
2. While there is a relation schema Q in D that is not in BCNF, do:
– choose a relation schema Q in D that is not in BCNF;
– find a FD X → Y in Q that violates BCNF;
– replace Q in D by the two relation schemas (Q - Y) and (X ∪ Y).
• The algorithm is based on two properties of lossless join decompositions:
– The decomposition D = {R1, R2} of R has the lossless join property with respect to F if and only if either:
• the FD (R1 ∩ R2) → (R1 - R2) is in F+, or
• the FD (R1 ∩ R2) → (R2 - R1) is in F+.
– If D = {R1, R2, ..., Rm} of R has the lossless join property with respect to F, and D1 = {Q1, Q2, ..., Qk} of Ri has the lossless join property with respect to πRi(F), then D2 = {R1, R2, ..., Ri-1, Q1, Q2, ..., Qk, Ri+1, ..., Rm} has the lossless join property with respect to F.
The Lossless (Non-Additive) Join Property
• There is no algorithm for decomposition into BCNF relations that is guaranteed to be dependency preserving.
• A modification of the synthesis algorithm guarantees both the lossless join and the dependency preservation properties, but yields 3NF relations, not BCNF.
• Fortunately, many 3NF relations are also in BCNF.
• Lossless join and dependency preserving decomposition into 3NF relations (Algorithm 15.4):
– Find a minimal set of FDs G equivalent to F.
– For each left-hand side X of an FD X → Y in G, create a relation schema Ri in D with the attributes {X, A1, A2, ..., Ak}, where A1, ..., Ak are all the attributes appearing in FDs of G with X as the left-hand side.
– If none of the relations in D contains a key of R, create a relation that contains a key of R and add it to D.
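Algorithm 15.4 is short once a minimal cover and a candidate key are in hand, so the sketch below takes both as inputs (finding them is done by separate algorithms; the sample schema and FDs are illustrative):

```python
def synthesize_3nf(attrs, G, key):
    """G: a minimal cover, as (lhs, rhs) attribute-set pairs;
    key: one candidate key of R (assumed given)."""
    # Group the FDs of G by left-hand side; each group becomes a schema.
    by_lhs = {}
    for l, r in G:
        by_lhs.setdefault(frozenset(l), set()).update(r)
    D = [set(l) | rhs for l, rhs in by_lhs.items()]
    # Ensure some relation contains a key of R (here: the key we were given).
    if not any(set(key) <= Ri for Ri in D):
        D.append(set(key))
    return D

# EMP(SSN, ENAME, DNO, DNAME) with G = {SSN->ENAME, SSN->DNO, DNO->DNAME}:
G = [({"SSN"}, {"ENAME"}), ({"SSN"}, {"DNO"}), ({"DNO"}, {"DNAME"})]
D = synthesize_3nf({"SSN", "ENAME", "DNO", "DNAME"}, G, {"SSN"})
print([sorted(Ri) for Ri in D])  # [['DNO', 'ENAME', 'SSN'], ['DNAME', 'DNO']]
```

Here the first schema already contains the key SSN, so no extra key relation is added.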
Null Values and Dangling Tuples
• There is no fully satisfactory relational design theory that includes null values.
• Pay attention to nulls in foreign keys.
• Dangling tuples
If a relation schema is decomposed into multiple relations, an inner join between these relations may not reproduce all the tuples of the original relation; the tuples that are lost are called dangling tuples.
Chapter 19 Transaction Processing Concepts
Transaction Processing
• Basic Concepts
• Why Concurrency Control is Needed
• Why Recovery is Needed
• Transaction States
• The System Log
• Desirable Properties of Transaction
• Schedules and Recoverability
• Serializable Schedules
• Use of Serializability
• View Equivalence and View Serializability
Basic Concepts
• Single-user/Multi-user DBMS
A DBMS is single-user if at most one user at a time can use the system; it is multi-user if many users can use the system concurrently.
• If only a single CPU exists, concurrent execution of programs is interleaved; if the system has multiple CPUs, execution can be simultaneous.
• Transaction
A transaction is a logical unit of database processing that includes one or more database operations (insertion, deletion, modification, retrieval).
• A simplified database model is used to explain transaction processing concepts. The basic database access operations are:
– read_item(X): reads the database item named X into a program variable (also named X, for simplicity).
– write_item(X): writes the value of the program variable X into the database item named X.
Basic Concepts
• Executing read_item(X) includes the following steps:
– Find the address of the disk block that contains item X.
– Copy that disk block into a buffer in main memory.
– Copy item X from the buffer to the program variable named X.
• Executing write_item(X) includes the following steps:
– Find the address of the disk block that contains item X.
– Copy that disk block into a buffer in main memory.
– Copy item X from the program variable named X into its correct location in the buffer.
– Store the updated block from the buffer back to disk.
• Example of two concurrent transactions:
T1: read_item(X); X = X - N; write_item(X); read_item(Y); Y = Y + N; write_item(Y)
T2: read_item(X); X = X + M; write_item(X)
Why Concurrency Control is Needed
• Several problems can occur when concurrent transactions execute in an uncontrolled manner:
– The lost update problem: occurs when two transactions that access the same database items have their operations interleaved in a way that makes the value of some database item incorrect.
– The temporary update (dirty read) problem: occurs when one transaction updates a database item and then the transaction fails for some reason; the updated item is accessed by another transaction before it is changed back to its original value.
– The incorrect summary problem: if one transaction is calculating an aggregate function on a number of records while another transaction is updating some of those records, the aggregate function may use some values before they are updated and others after they are updated.
• Whenever a transaction is submitted to a DBMS for execution, the system must ensure that either:
– all the operations in the transaction are completed successfully and their effect is recorded permanently in the database; or
– the transaction has no effect whatsoever on the database or on other transactions, in case the transaction fails after executing some of its operations but before executing all of them.
Why Concurrency Control is Needed
• The lost update problem (T1 and T2 interleaved):
T1: read_item(X); X = X - N;
T2: read_item(X); X = X + M;
T1: write_item(X); read_item(Y);
T2: write_item(X); (T1's update of X is overwritten and lost)
T1: Y = Y + N; write_item(Y);
• The temporary update (dirty read) problem:
T1: read_item(X); X = X - N; write_item(X);
T2: read_item(X); X = X + M; write_item(X);
T1: read_item(Y); then T1 fails and X must be restored to its old value, but T2 has already read the temporary value
• The incorrect summary problem (T3 computes a sum while T1 updates X and Y):
T3: sum = 0; read_item(A); sum = sum + A;
T1: read_item(X); X = X - N; write_item(X);
T3: read_item(X); sum = sum + X; read_item(Y); sum = sum + Y; (reads X after T1's update but Y before it)
T1: read_item(Y); Y = Y + N; write_item(Y);
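The lost update interleaving can be replayed directly; a sketch with assumed values (initial X = 100, N = 10, M = 20 are illustrative, not from the text):

```python
# Interleaved schedule r1(X); r2(X); w1(X); w2(X): T2 overwrites T1's write.
db = {"X": 100}
N, M = 10, 20

x1 = db["X"]    # T1: read_item(X)
x2 = db["X"]    # T2: read_item(X)  -- reads the same old value
x1 = x1 - N     # T1: X = X - N
db["X"] = x1    # T1: write_item(X) -> 90
x2 = x2 + M     # T2: X = X + M     -- based on the stale read
db["X"] = x2    # T2: write_item(X) -> 120; T1's update is lost

print(db["X"])  # 120, whereas any serial order yields (100 - N) + M = 110
```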
Why Recovery is Needed
• Possible reasons for a transaction to fail in the middle of execution:
– System crash: if the hardware crashes, the contents of main memory may be lost.
– Transaction or system error.
– Local errors or exception conditions detected by the transaction.
– Concurrency control enforcement.
– Disk failure during a read or write operation of the transaction.
– Physical problems and catastrophes.
• Whenever a failure occurs, the system must keep sufficient information to recover from the failure.
Transaction Concepts
• A transaction is an atomic unit of work that is either completed in its entirety or not done at all. For recovery purposes, the recovery manager needs to keep track of the following operations:
– BEGIN_TRANSACTION
– READ or WRITE
– END_TRANSACTION
– COMMIT_TRANSACTION
– ROLLBACK (or ABORT)
• States of transaction execution:
– Active state: the transaction starts execution and issues READ and WRITE operations.
– Partially committed state: the transaction has ended; checks by the concurrency control and recovery protocols are carried out.
– Committed state: the transaction has concluded its execution successfully.
– Failed state: the transaction is aborted during its active state, or one of the checks fails during the partially committed state.
– Terminated state: the transaction has left the system.
The System Log
• To recover from transaction failures, the system maintains a log, which keeps track of all transaction operations that affect the values of database items. The log is kept on disk and periodically backed up to tape. Typical log entries:
– [start_transaction, T]
– [write_item, T, X, old_value, new_value]
– [read_item, T, X]
– [commit, T]
– [abort, T]
Commit Point of Transaction
• A transaction T reaches its commit point when all of its operations that access the database have been executed successfully and the effect of all of its operations on the database has been recorded in the log. The transaction then writes an entry [commit, T] into the log.
• The log file is force-written to disk before a transaction commits.
• A [checkpoint] record is written into the log periodically, at a point when the system writes out to the database on disk the effect of all WRITE operations of committed transactions.
Desirable Properties of Transaction
• The desirable properties of transactions (the ACID properties):
– Atomicity: a transaction is an atomic unit of processing; it is either performed in its entirety or not at all. If a transaction fails, recovery may need to undo its effects.
– Consistency preservation: a correct execution of a transaction must take the database from one consistent state to another (the responsibility of the database programs or the DBMS modules).
– Isolation: a transaction should not make its updates visible to other transactions until it is committed (enforced by concurrency control).
– Durability (permanency): once a transaction is committed, its changes to the database must never be lost because of subsequent failures (enforced by recovery).
Schedules and Recoverability
• Schedules of Transactions
– A schedule S of n transactions T1, T2, ..., Tn is an ordering of the operations of the transactions, subject to the constraint that for each transaction Ti, the operations of Ti in S must appear in the same order in which they occur in Ti (a total ordering).
– The schedule of Figure 19.3(a): Sa: r1(X); r2(X); w1(X); r1(Y); w2(X); c2; w1(Y); c1;
– The schedule of Figure 19.3(b): Sb: r1(X); w1(X); r2(X); w2(X); c2; r1(Y); a1;
– Two operations in a schedule are said to conflict if they belong to different transactions, they access the same item X, and at least one of them is a write_item(X).
– A schedule S of n transactions T1, T2, ..., Tn is said to be a complete schedule if the following conditions hold:
• The operations in S are exactly those of T1, T2, ..., Tn, including a commit or abort operation as the last operation of each transaction in the schedule.
• For any pair of operations from the same transaction Ti, their order of appearance in S is the same as their order of appearance in Ti.
• For any two conflicting operations, one of the two must occur before the other in the schedule.
– The committed projection C(S) of the schedule S includes only the operations in S that belong to committed transactions.
Schedules and Recoverability
• Characterizing schedules based on recoverability:
– A schedule S is recoverable if no transaction T in S commits until all transactions T' that have written an item that T reads have committed.
– A transaction T is said to read from transaction T' in a schedule S if some item X is first written by T' and later read by T.
Sc: r1(X); w1(X); r2(X); r1(Y); w2(X); c2; a1; is not recoverable (T2 reads X from T1 and commits, but T1 then aborts).
Sd: r1(X); w1(X); r2(X); r1(Y); w2(X); w1(Y); c1; c2; is recoverable.
– Cascading rollback: an uncommitted transaction has to be rolled back because it read an item from a transaction that failed, e.g.:
Se: r1(X); w1(X); r2(X); r1(Y); w2(X); w1(Y); a1; a2; (T2 must be rolled back because it read X from the failed T1)
– A schedule is said to be cascadeless (it avoids cascading rollback) if every transaction in the schedule reads only items that were written by already committed transactions.
– Strict schedule: transactions can neither read nor write an item X until the last transaction that wrote X has committed (or aborted). For example, the following schedule is not strict:
Sf: w1(X,5); w2(X,8); a1;
Serializable Schedules
• Serializable Schedules
A schedule is serial if, for every transaction T participating in the schedule, all the operations of T are executed consecutively in the schedule; otherwise it is called a nonserial schedule.
• Every serial schedule is considered correct; some nonserial schedules give erroneous results.
• A schedule S of n transactions is serializable if it is equivalent to some serial schedule of the same n transactions; a nonserial schedule that is not equivalent to any serial schedule is not serializable.
• Ways in which two schedules can be considered "equivalent":
– Result equivalent: the schedules produce the same final state of the database (not used as a definition, since two schedules may produce the same final state by accident).
– Conflict equivalent: the order of any two conflicting operations is the same in both schedules.
– View equivalent: each read operation of a transaction reads the result of the same write operation in both schedules, and the write operations of each transaction must produce the same results.
• Conflict serializable: a schedule S is conflict serializable if it is conflict equivalent to some serial schedule; in that case we can reorder the non-conflicting operations in S until we form the equivalent serial schedule, and S is a serializable schedule.
Testing for Conflict Serializability
• Precedence graph (serialization graph)
A precedence graph is a directed graph G = (N, E) consisting of a set of nodes N = {T1, T2, ..., Tn} and a set of directed edges E = {e1, e2, ..., em}. There is one node in the graph for each transaction Ti in the schedule. Each edge ei in the graph has the form (Tj → Tk), and is created if one of the operations in Tj appears in the schedule before some conflicting operation in Tk. The edges can optionally be labeled with the names of the data items that led to creating them.
• Algorithm:
– If there is a cycle in the precedence graph, schedule S is not conflict serializable.
– If there is no cycle, we can create a serial schedule S' equivalent to S by ordering the transactions that participate in S using topological sorting.
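The whole test can be sketched in a few lines: build the precedence graph from conflicting operation pairs, then try to order the transactions topologically. The (transaction, op, item) schedule encoding is my own convention, and the two sample schedules are illustrative:

```python
def precedence_graph(schedule):
    """schedule: list of (txn, 'r'|'w', item) in execution order."""
    edges = set()
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            # conflicting pair: different txns, same item, at least one write
            if ti != tj and xi == xj and "w" in (oi, oj):
                edges.add((ti, tj))
    return edges

def serial_order(schedule):
    """Topological order of the precedence graph, or None on a cycle
    (i.e. the schedule is not conflict serializable)."""
    txns = {t for t, _, _ in schedule}
    edges = precedence_graph(schedule)
    order = []
    while txns:
        free = [t for t in txns
                if not any((u, t) in edges for u in txns if u != t)]
        if not free:
            return None  # cycle in the precedence graph
        t = min(free)
        txns.remove(t)
        order.append(t)
    return order

# Conflicts both ways (r2(X) before w1(X), and w1(X) before w2(X)):
Sa = [("T1", "r", "X"), ("T2", "r", "X"), ("T1", "w", "X"),
      ("T1", "r", "Y"), ("T2", "w", "X"), ("T1", "w", "Y")]
# All conflicts go T1 -> T2, so the schedule is conflict serializable:
Sb = [("T1", "r", "X"), ("T1", "w", "X"), ("T2", "r", "X"), ("T2", "w", "X")]

print(serial_order(Sa))  # None
print(serial_order(Sb))  # ['T1', 'T2']
```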
Uses of Serializability
• A serial schedule represents inefficient processing because no interleaving of operations from different transactions is permitted.
• A serializable schedule gives us the benefits of concurrent execution without losing correctness.
• It is practically impossible to determine beforehand how the operations of a schedule will be interleaved, so interleaving cannot be planned so as to ensure serializability.
• It is also impractical to test schedules for serializability after the transactions have executed and then cancel the effects of non-serializable schedules.
• The practical approach is to ensure serializability without having to test the schedules themselves after they are executed.
• The theory of serializability is used to devise protocols, i.e. sets of rules that are followed by every individual transaction or enforced by the DBMS concurrency control subsystem (Chapter 20):
– Two-phase locking
– Timestamp ordering
Chapter 19 Transaction Processing Concepts
View Equivalence and View Serializability
• A less restrictive definition of equivalence.
• Two schedules S and S' are said to be view equivalent if the following three conditions hold:
– The same set of transactions participate in S and S', and S and S' include the same operations of those transactions.
– For any operation ri(X) of Ti in S, if the value of X read by the operation was written by an operation wj(X) of Tj (or if it is the original value of X before the schedule started), the same condition must hold for the value of X read by operation ri(X) of Ti in S'.
– If the operation wk(Y) of Tk is the last operation to write item Y in S, then wk(Y) of Tk must also be the last operation to write item Y in S'.
• A schedule S is said to be view serializable if it is view equivalent to a serial schedule.
• Constrained write assumption: any write operation wi(X) in Ti is preceded by an ri(X) in Ti, and the value written by wi(X) depends only on the value of X read by ri(X). (Under this assumption, conflict serializability and view serializability are equivalent.)
Chapter 19 Transaction Processing Concepts
Transaction Support in SQL
• Every transaction has certain characteristics attributed to it:
– Access mode: read only / read write
– Diagnostic area size: indicates the number of conditions that can be held simultaneously in the diagnostic area.
– Isolation level• Read uncommitted
• Read committed
• Repeatable read
• Serializable
• Violations– Dirty read
– Non-repeatable read
– Phantoms
Isolation level      Dirty read   Nonrepeatable read   Phantoms
Read uncommitted     yes          yes                  yes
Read committed       no           yes                  yes
Repeatable read      no           no                   yes
Serializable         no           no                   no
Chapter 20 Concurrency Control Techniques
Concurrency Control
• Locking Techniques
• Concurrency Control Based on Timestamp Ordering
• Multi-version Concurrency Control Techniques
• Validation Concurrency Control Techniques
• Granularity of Data Items
• Some Other Concurrency Control Issues
Chapter 20 Concurrency Control Techniques
Locking Techniques
• LockA lock is a variable associated with a data item in the database and describes the
status of that item with respect to possible operations that can be applied to the item. Generally, there is one lock for each data item.
• Binary lock
• Shared and exclusive lock
• Two Phase Locking
• Dead locks
• Dead Lock Detection and Timeout
Chapter 20 Concurrency Control Techniques
Binary Lock
• A binary lock has two states: locked (= 1) and unlocked (= 0). The value of the lock associated with data item X is denoted Lock(X).
• Two operations, lock_item and unlock_item. They are implemented as critical sections, that is, no interleaving is allowed until the operation terminates.
• A binary lock enforces mutual exclusion on the data item.• A transaction requests access to an item X by issuing a lock_item(X). When the transaction is
through using the item, it issues an unlock_item (X). The DBMS has a lock manager subsystem to keep track of and control access to locks. Every transaction must obey the following rules:
– A transaction T must issue the operation lock_item(X) before any read_item(X) or write_item(X) operations are performed in T
– A transaction T must issue the operation unlock_item(X) after all read_item(X) and write_item(X) operations are completed in T
– A transaction T will not issue a lock_item(X) operation if it already holds the lock on item X
– A transaction T will not issue an unlock_item(X) operation unless it already holds the lock on item X
• Between the lock_item(X) and unlock_item(X) in the transaction T, T is said to hold the lock on item X. At most, one transaction can hold the lock on a particular item. No two transactions can access the same item concurrently.
• The system maintains a lock table, in which each lock is a record with two fields: <data item name, LOCK>.
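A minimal sketch of such a lock table in Python (the class and method names are illustrative, not from the text); waiting is modeled simply by returning False instead of blocking:

```python
class BinaryLockManager:
    """Toy lock-manager lock table for binary locks: each entry maps a
    data item name to the transaction currently holding its lock."""
    def __init__(self):
        self.lock_table = {}   # item -> holding transaction

    def lock_item(self, txn, item):
        # Rule: a transaction may not re-request a lock it already holds.
        if self.lock_table.get(item) == txn:
            raise RuntimeError(txn + " already holds the lock on " + item)
        if item in self.lock_table:
            return False       # locked by another transaction: must wait
        self.lock_table[item] = txn
        return True

    def unlock_item(self, txn, item):
        # Rule: only the holder of the lock may unlock the item.
        if self.lock_table.get(item) != txn:
            raise RuntimeError(txn + " does not hold the lock on " + item)
        del self.lock_table[item]
```

Here T2's request for X fails while T1 holds the lock, and succeeds after T1 unlocks: at most one transaction holds the lock on a particular item at any time.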
Chapter 20 Concurrency Control Techniques
Shared and Exclusive Lock
• Multiple-mode lock
A multiple-mode lock has three possible states: "read-locked", "write-locked", or "unlocked". A read-locked item is also called share-locked, whereas a write-locked item is called exclusive-locked. There are three operations: read_lock(X), write_lock(X), and unlock(X).
• The system keeps track of the number of transactions that hold a shared lock on an item. Each lock can be a record with three fields: <data item name, LOCK, no_of_reads>
• The system must enforce the following rules:
– A transaction T must issue the operation read_lock(X) or write_lock(X) before any read_item(X) is performed in T.
– A transaction T must issue the operation write_lock(X) before any write_item(X) is performed in T.
– A transaction T must issue the operation unlock(X) after all read_item(X) and write_item(X) operations are completed in T.
– A transaction T will not issue a read_lock(X) operation if it already holds a read (shared) lock or write (exclusive) lock on item X.
– A transaction T will not issue a write_lock(X) operation if it already holds a read (shared) lock or write (exclusive) lock on item X.
– A transaction T will not issue an unlock(X) operation unless it already holds a read (shared) lock or write (exclusive) lock on item X.
• It is possible to relax rules 4 and 5 to allow downgrading and upgrading of locks (conversion of locks).
• Using binary locks or multiple-mode locks alone does not guarantee serializability of schedules.
Chapter 20 Concurrency Control Techniques
Two-Phase Locking
• Guaranteeing serializability by two-phase locking
A transaction is said to follow the two-phase locking protocol if all locking operations (read_lock, write_lock) precede the first unlock operation in the transaction.
• The transaction can be divided into two phases:
– Expanding (growing) phase, during which new locks on items can be acquired but none can be released;
– Shrinking phase, during which existing locks can be released but no new locks can be acquired.
• It can be proved that, if every transaction in a schedule follows the two-phase locking protocol, the schedule is guaranteed to be serializable.
• Two-phase locking may limit the amount of concurrency that can occur in a schedule. This is the price for guaranteeing serializability of all schedules without having to check the schedules themselves.
– A transaction T may not be able to release an item X after it is through using it if T needs to lock other items later; X must remain locked by T until all other items that T needs have been locked.
– A transaction T must lock an item Y before it needs it so that it can release a previously locked item X. Meanwhile, another transaction seeking to access Y has to wait even though T is not using Y yet.
• Conservative 2PL: a deadlock-free protocol. A transaction locks all the items it accesses before the transaction begins execution, by pre-declaring its read set and write set.
• Strict 2PL: guarantees strict schedules. A transaction does not release any of its exclusive (write) locks until after it commits or aborts.
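As a toy illustration (not from the text), the two-phase condition on a single transaction's operation sequence can be checked mechanically: once the first unlock appears, no further lock operation may follow. The operation names below are hypothetical:

```python
def follows_two_phase_locking(ops):
    """Check that a transaction's operation sequence obeys the 2PL protocol:
    no lock operation may appear after the first unlock operation.
    ops: list of operation names, e.g. ['read_lock', 'unlock', ...]."""
    shrinking = False   # flips to True at the first unlock
    for op in ops:
        if op.startswith('unlock'):
            shrinking = True
        elif op in ('read_lock', 'write_lock') and shrinking:
            return False   # a lock was acquired in the shrinking phase
    return True
```

A sequence such as read_lock, unlock, write_lock violates the protocol, since a new lock is requested after a lock has already been released.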
Chapter 20 Concurrency Control Techniques
Dead locks
• Dead lockDeadlock occurs when each of two transactions is waiting for the other to release the lock on
the item. Meanwhile, neither can proceed to unlock the item that the other is waiting for.• Deadlock prevention protocols:
– Conservative 2PL: Every transaction locks all the items it needs in advance; if any of the items can not be obtained, none of the items are locked. This solution limits concurrency.
– Timestamp (TS(T)): a unique identifier assigned to each transaction based on the order in which the transactions are started.
– wait-die: if TS(Ti)< TS(Tj)(Ti is older than Tj), then Ti is allowed to wait, otherwise abort Ti(Ti dies) and restart it later with the same timestamp
– wound-wait: if TS(Ti)< TS(Tj)(Ti is older than Tj), then abort Tj(Ti wounds Tj) and restart it later with the same timestamp , otherwise Ti is allowed to wait.
– No waiting algorithm: if a transaction is unable to obtain a lock, it is immediately aborted and then restarted after a certain time delay, without checking whether a deadlock will actually occur. This may cause transactions to abort and restart needlessly.
– Cautious waiting algorithm: if Tj is not blocked (not waiting for some other locked item), then Ti is blocked and allowed to wait; otherwise abort Ti. Cautious waiting reduces the number of needless aborts/restarts and is deadlock free, since the blocking times form a total ordering on all blocked transactions.
Chapter 20 Concurrency Control Techniques
Dead Lock Detection and Timeout
• Using deadlock detection:periodically check to see if the system is in a state of deadlock. This solution is attractive if
there will be little interference among the transactions. Otherwise, it is advantageous to use a deadlock prevention protocol.
• Construct a wait-for graph to detect the state of deadlock. A node is created for each transaction, and there is a directed edge (Ti → Tj) if Ti is waiting to lock an item X that is currently locked by Tj.
• We have a state of deadlock if and only if the wait-for graph has a cycle.
• When deadlock occurs, some of the transactions causing the deadlock must be aborted. Choosing which transaction to abort is known as victim selection. The algorithm should avoid selecting transactions that have been running for a long time and have performed many updates.
• Starvation occurs if the algorithm for dealing with deadlock repeatedly selects the same transaction as victim, causing it to abort and never finish execution. The wait-die and wound-wait schemes avoid this problem. The standard solution is a fair waiting scheme, such as a first-come-first-served queue.
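Cycle detection on the wait-for graph is a standard depth-first search; a "back edge" to a transaction still on the DFS stack means a cycle, hence deadlock. A sketch (the dict-of-sets graph representation is an assumption):

```python
def has_deadlock(wait_for):
    """wait_for: dict mapping Ti -> set of transactions Ti is waiting for.
    Deadlock exists iff the wait-for graph contains a cycle."""
    nodes = set(wait_for) | {u for vs in wait_for.values() for u in vs}
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited, on DFS stack, finished
    color = {t: WHITE for t in nodes}

    def dfs(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            # A gray neighbor is a back edge: we found a cycle.
            if color[u] == GRAY or (color[u] == WHITE and dfs(u)):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in nodes)
```

For two transactions waiting on each other (T1 → T2 and T2 → T1) the function reports deadlock; a single edge T1 → T2 is harmless waiting.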
Chapter 20 Concurrency Control Techniques
Concurrency Control Based on Timestamp Ordering
• Timestamps
A timestamp is a unique identifier assigned by the DBMS to identify a transaction. Timestamps TS(T) are assigned in the order in which the transactions are submitted to the system. A timestamp can be generated using a counter or the system clock.
• The Timestamp Ordering Algorithm
We order the transactions based on their timestamps so that any schedule in which the transactions participate is serializable, and the equivalent serial schedule has the transactions in order of their timestamp values. The TO algorithm associates with each database item X two timestamp values:
• 1. read_TS(X): the read timestamp of item X; this is the largest timestamp among all the timestamps of transactions that have successfully read item X.
• 2. write_TS(X): the write timestamp of item X; this is the largest of all the timestamps of transactions that have successfully written item X.
Chapter 20 Concurrency Control Techniques
Concurrency Control Based on Timestamp Ordering
• Basic TO algorithm: Guarantees serializability– Transaction T issues a write_item(X) operation: If read_TS(X) > TS(T) or if write_TS(X) >
TS(T) then abort and roll back T and reject the operation. Otherwise, execute the write_item(X) operation of T and set write_TS(X) to TS(T).
– Transaction T issues a read_item(X) operation: If write_TS(X) > TS(T), then abort and roll back T and reject the operation. If write_TS(X) ≤ TS(T), then execute the read_item(X) operation of T and set read_TS(X) to the larger of TS(T) and the current read_TS(X).
– Cascading rollback: If T is aborted and rolled back, any transaction T1 that may have used a value written by T must also be rolled back. Similarly, any transaction T2 that may have used a value written by T1 must also be rolled back, and so on.
• Thomas's write rule: does not enforce conflict serializability. When T issues a write_item(X):
– If read_TS(X) > TS(T), then abort and roll back T and reject the operation.
– If write_TS(X) > TS(T), then do not execute the write operation but continue processing.
– If neither condition occurs, then execute the write_item(X) operation of T and set write_TS(X) to TS(T).
• Strict TO: ensures that schedules are both strict and conflict serializable. A transaction T that issues a read_item(X) or write_item(X) such that TS(T) > write_TS(X) has its read or write operation delayed until the transaction T' that wrote the value of X has committed or aborted.
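The basic TO checks (with Thomas's write rule as an option) can be sketched as follows; representing an item as a dict with 'read_TS' and 'write_TS' fields is an assumption for illustration:

```python
def to_read(ts, item):
    """Basic timestamp-ordering check for read_item(X) by a transaction
    with timestamp ts. item holds 'read_TS' and 'write_TS' for X."""
    if item['write_TS'] > ts:
        return 'abort'   # a younger transaction already wrote X
    item['read_TS'] = max(item['read_TS'], ts)
    return 'ok'

def to_write(ts, item, thomas=False):
    """Basic TO check for write_item(X); thomas=True applies Thomas's
    write rule (skip obsolete writes instead of aborting)."""
    if item['read_TS'] > ts:
        return 'abort'   # a younger transaction already read X
    if item['write_TS'] > ts:
        return 'skip' if thomas else 'abort'
    item['write_TS'] = ts
    return 'ok'
```

For example, after a transaction with timestamp 5 reads X, a write by an older transaction with timestamp 3 must be rejected, because the younger read would otherwise have missed it.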
Chapter 20 Concurrency Control Techniques
Multiversion Concurrency Control Techniques
• Multiversion Concurrency Control
– Protocols that keep several versions of a data item. The algorithms use the concept of view serializability. More storage is needed to maintain the multiple versions.
• Multiversion Technique Based on Timestamp Ordering– The system keeps several versions X1, X2, ...,Xk of each data item X. The following two
timestamps are kept for each version Xi: read_TS(Xi): The read timestamp of Xi and write_TS(Xi): The write_timestamp of Xi.
– To ensure serializability, the two rules are used to control the reading and writing of data items:
• If transaction T issues a write_item(X) operation, and version i of X has the highest write_TS(Xi) of all versions of X that is also less than or equal to TS(T), and read_TS(Xi) > TS(T), then abort and roll back transaction T; otherwise, create a new version Xj of X with read_TS(Xj) = write_TS(Xj) = TS(T).
• If transaction T issues a read_item(X) operation, find the version i of X that has the highest write_TS(Xi) of all versions of X that is also less than or equal to TS(T); then return the value of Xi to transaction T and set the value of read_TS(Xi) to the larger of TS(T) and the current read_TS(Xi).
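The two rules can be sketched directly; a version list of dicts with 'value', 'read_TS', and 'write_TS' fields is an assumed representation, not from the text:

```python
def mv_read(ts, versions):
    """Read X for a transaction with timestamp ts: find the version with
    the largest write_TS <= ts, update its read_TS, return its value."""
    v = max((v for v in versions if v['write_TS'] <= ts),
            key=lambda v: v['write_TS'])
    v['read_TS'] = max(v['read_TS'], ts)
    return v['value']

def mv_write(ts, versions, value):
    """Write X: abort if a younger transaction already read the version
    this write would follow; otherwise create a new version."""
    v = max((v for v in versions if v['write_TS'] <= ts),
            key=lambda v: v['write_TS'])
    if v['read_TS'] > ts:
        return 'abort'
    versions.append({'value': value, 'read_TS': ts, 'write_TS': ts})
    return 'ok'
```

Note that reads never wait or abort under this scheme: an appropriate older version always exists to return.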
Chapter 20 Concurrency Control Techniques
Multiversion Concurrency Control Techniques
• Multiversion Two-Phase Locking
– Reads can proceed concurrently with a write operation.
– There are three locking modes for a data item: read, write, and certify. The state of an item X can be one of "read-locked", "write-locked", "certify-locked", and "unlocked".
– Lock compatibility tables (standard 2PL on the left; multiversion 2PL with certify locks on the right):

            read  write                  read  write  certify
  read       y     n          read        y     y      n
  write      n     n          write       y     n      n
                              certify     n     n      n
– Two versions, X old and X new, are kept for each item X. X old has been written by some committed transaction. X new is created when a transaction T acquires a write lock on the item X. Other transactions can continue to read X old while T holds the write lock.
– T must obtain a certify lock, on all items that it currently holds write locks on before it can commit. The cost is that T may have to delay its commit until it obtains exclusive certify locks on all the items it has updated.
– X old is set to X new, and X new is discarded and the certify locks are then released.
Chapter 20 Concurrency Control Techniques
Validation Concurrency Control Techniques
• Validation Concurrency ControlDuring transaction execution, all updates are applied to local copies of the data items ( they are not
applied directly to the database items). At the end of transaction execution, a validation phase checks whether any of the updates violate serializability. If not, the transaction is committed and the database is updated from the local copies, otherwise the transaction is aborted.
• Three phases for this protocol:
– The read phase: a transaction can read values of data items from the database. However, updates are
applied only to local copies of the data items kept in the transaction workspace.– The validation phase: Checking is performed to ensure that serializability will not be violated if the
transaction updates are applied to the database.– The write phase: If the validation phase is successful, the transaction updates are applied to the
database; otherwise, the updates are discarded and the transaction is restarted.• Optimistic techniques work well under circumstances of little interference among the
transactions.
• In the validation phase for transaction Ti, we check that, for each committed transaction Tj (and each transaction currently in its validation phase), one of the following conditions holds, so that Ti does not interfere with Tj:
– Transaction Tj completes its write phase before Ti starts its read phase.
– Ti starts its write phase after Tj completes its write phase, and the read_set of Ti has no items in common with the write_set of Tj.
– Both the read_set and the write_set of Ti have no items in common with the write_set of Tj, and Tj completes its read phase before Ti completes its read phase.
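A sketch of the validation check in Python; the transaction representation (read/write sets plus phase-boundary times 'read_start', 'read_end' — the start of the write phase — and 'write_end') is hypothetical, chosen to make the three conditions testable:

```python
def validate(ti, others):
    """Optimistic (validation-based) check: return True if Ti may enter
    its write phase, i.e. one of the three non-interference conditions
    holds against every transaction Tj in `others`."""
    for tj in others:
        if tj['write_end'] <= ti['read_start']:
            continue   # 1: Tj finished writing before Ti started reading
        if tj['write_end'] <= ti['read_end'] and \
           not (ti['read_set'] & tj['write_set']):
            continue   # 2: Ti writes after Tj; read/write sets disjoint
        if tj['read_end'] <= ti['read_end'] and \
           not ((ti['read_set'] | ti['write_set']) & tj['write_set']):
            continue   # 3: no overlap at all with Tj's write set
        return False   # Ti interferes with Tj: abort and restart Ti
    return True
```

If validation fails, the local copies are discarded and the transaction is restarted, as described above.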
Chapter 20 Concurrency Control Techniques
Granularity of Data Items
• The size of the data items is called the data item granularity. A database item could be one of the following:– A database record.
– A field value of database record.
– A disk block
– A whole file.
– A whole database.
• Several trade-offs must be considered in choosing the data item size.– The larger the data item size is, the lower is the degree of concurrency
permitted.
– The smaller the data item size is, the more items will exist in the database, which causes higher overhead.
• What is the best size? It depends on the types of transactions involved.
Chapter 20 Concurrency Control Techniques
Some Other Concurrency Control Issues
• Insertion, Deletion, and Phantom Records
– The phantom problem can occur when a new record being inserted by some transaction T satisfies a condition that a set of records accessed by another transaction T' must satisfy. The record that causes the conflict is a phantom record, which may not be recognized by the concurrency control protocol.
– One solution is index locking, which can be used with a two-phase locking protocol. If the index entry is locked before the record itself can be accessed, then the conflict on the phantom record can be detected since the index locks conflict.
• Interactive Transactions
– Problem: a user can input a value for a data item to a transaction T based on some value written to the screen by transaction T', which may not have committed.
– An approach is to postpone output of transactions to the screen until they have committed.
Chapter 21 Database Recovery Techniques
Database Recovery
• Recovery Concepts
• Recovery Techniques Based on Deferred Updates
• Recovery Techniques Based on Immediate Updates
• Shadow Paging
Chapter 21 Database Recovery Techniques
Recovery Concepts
• Recovery Outline– Recovery from transaction failures :
The database is restored to some past state so that a correct state (which is close to the time of failure) can be reconstructed from that past state. The system keeps information about the changes to data items in the system log.
– There are different strategies for the catastrophic and non-catastrophic failures:• Catastrophic failure
The recovery method restores a past copy of the database that was dumped to archival storage (typically tape) and reconstructs a more current state by reapplying or redoing committed transaction operations from the log up to the time of failure.
• Non-catastrophic failure of types 1 through 4 (system crash, transaction error, local error, concurrency control enforcement)
The strategy is to reverse the changes that caused the inconsistency by undoing some operations. It may also be necessary to redo some operations in order to restore a consistent state of the database, as we shall see. In this case, we do not need a complete archival copy of the database. Rather , the entries kept in the system log are consulted during recovery.
Chapter 21 Database Recovery Techniques
Recovery Concepts
• Recovery Outline– There are two main techniques for recovery from non-catastrophic transaction
failures:• Deferred Update ( NO-UNDO/REDO algorithm):
Updates are not recorded in the database until after a transaction reaches its commit point; before commit, all updates are recorded in the local workspace; during commit, the updates are first recorded in the log and then written to the database.
• Immediate Update ( UNDO/REDO algorithm):The database may be updated by some operations before a transaction reaches its commit
point; these operations are recorded in the log on the disk by force-writing before they are applied to the database.
Chapter 21 Database Recovery Techniques
Recovery Concepts
• System concepts for recovery
– Disk pages: the disk blocks that hold the data items.
– DBMS cache: a collection of in-memory buffers.
– A cache directory• ( item name, buffer location ) is used to keep track of which database items are in
buffers.
• Associated with each item in the cache is a dirty bit, which can be included in the directory , to indicate whether or not the item has been modified.
– Before image: the old value of data item before updating.
– After image: the new value after updating.
Chapter 21 Database Recovery Techniques
Recovery Concepts
• System concepts for recovery– In-place updating: overwriting the old value of data item on the disk.
Write-ahead logging protocol:– The before image of an item can not be overwritten by its after image on the disk until all
UNDO-type log records for the updating transaction up to this point in time have been force-written to the disk.
– The commit operation of a transaction can not be completed until all the REDO-type and UNDO-type log records for that transaction have been force written to disk.
– shadowing: write a new item at a different disk location, so multiple copies of a data item can be maintained.
• DBMS recovery subsystem maintains a number of lists to make the recovery process more efficient.
• active transaction list;
• committed transaction list;
• aborted transaction list.
Chapter 21 Database Recovery Techniques
Recovery Concepts
• Transaction Rollback– The log entries are used to recover the old values of the data items that must be
rolled back. Most recovery mechanisms are designed to avoid cascading rollback.
– read_item operation entries in the log are needed only for determining cascading rollback.
Chapter 21 Database Recovery Techniques
Recovery Techniques Based on Deferred Update
• A typical deferred update protocol:– A transaction can not change the database until it reaches the commit point.– A transaction does not reach its commit point until all its update operations are recorded
in the log and the log is force written to the disk.– The REDO is needed in case the system fails after the transaction commits but before all
its changes are recorded in the database.• Recovery Using Deferred Update in a Single User Environment
– The algorithm for redoing certain write_item operations:
PROCEDURE RDU_S: Use two lists of transactions: the committed transactions since the last checkpoint, and the active transactions (at most one transaction will fall in this category, because the system is single user). Apply the REDO operation to all the write_item operations of the committed transactions from the log, in the order in which they were written to the log. Restart the active transactions.
– The REDO procedure is defined as follows:REDO(WRITE_OP) Redoing a write_item operation WRITE_OP consists of examining its log
entry [write_item, T, X, new_value] and setting the value of item X in the database to new_value, which is the after image(AFIM).
– The REDO operation is required to be idempotent, so that the result of recovering from a crash during recovery is the same as the result of recovering when there is no crash during recovery.
– The transactions in the active list are ignored completely by the recovery process.
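RDU_S reduces to a single forward pass over the log. The sketch below uses a dict as a toy database and shows only the redone items (a simplifying assumption — since REDO is idempotent, reapplying the committed after images to any starting state yields the same values for those items):

```python
def rdu_s(log, committed):
    """NO-UNDO/REDO recovery sketch (single-user, deferred update).
    log: list of (op, T, X, new_value) entries in log order;
    committed: set of transactions committed since the last checkpoint.
    Redo the writes of committed transactions; ignore active ones."""
    db = {}
    for op, txn, item, new_value in log:
        if op == 'write_item' and txn in committed:
            db[item] = new_value   # install the after image (AFIM)
    return db
```

In the example below, T2 never committed, so its write to Y is ignored; T1's two writes to X are replayed in log order, leaving the last value.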
Chapter 21 Database Recovery Techniques
Recovery Techniques Based on Deferred Update
• Deferred Update with Concurrent Execution in a Multi-user Environment
– The recovery algorithm depends on the protocol used for concurrency control.
– PROCEDURE RDU_M (WITH CHECKPOINTS, for strict two-phase locking): Use two lists of transactions maintained by the system: the committed transactions T since the last checkpoint, and the active transactions T'. REDO all the WRITE operations of the committed transactions from the log, in the order in which they were written to the log. The transactions that are active and did not commit are effectively canceled and must be resubmitted.
– We can start from the end of the log to make the algorithm more efficient, based on the observation that if a data item X has been updated more than once by committed transactions, it is only necessary to REDO the last update of X from the log during recovery.
– Drawback: it limits the concurrent execution of transactions, because all items remain locked until the transaction reaches its commit point.
– Benefit:• A transaction does not record its change in the database until it reaches its commit point-that is,
until it completes its execution successfully. Hence , a transaction is never rolled back because of failure during transaction execution.
• A transaction will never read the value of an item that is written by an uncommitted transaction , because items remain locked until a transaction reaches its commit point. Hence, no cascading rollback will occur.
Chapter 21 Database Recovery Techniques
Recovery Techniques Based on Deferred Update
• Transaction Actions That Do Not Affect the Database.– A common method is to issue the commands that generate the reports but keep
them as batch jobs, which are executed only after the transaction reaches its commit point.
Chapter 21 Database Recovery Techniques
Recovery Techniques Based on Immediate Update
• An update operation must be recorded in the log(disk) before it is applied to the database so that we can recover in case of failure (write-ahead logging protocol).
• Recovery technique includes the capability to rollback a transaction by undoing the effect of its write_item operations.
• UNDO/REDO Recovery Based on Immediate Update in a Single User environment– PROCEDURE RIU_S
• Use two lists of transactions maintained by the system: the committed transactions since the last checkpoint, and the active transactions(at most one transaction will fall in this category, because the system is single-user).
• Undo all the write_item operations of the active transaction from the log, using the UNDO procedure described hereafter.
• Redo all the write_item operations of the committed transaction from the log , in the order in which they were written in the log, using the REDO procedure
– UNDO (WRITE_OP) Undoing a write_item operation WRITE_OP consists of examining its log entry [ write_item, T, X, old_value, new_value] and setting the value of item X in the database to old_value which is the before image(BFIM). Undoing a number of write_item operations from one or more transactions from the log must proceed in the reverse order from the order in which the operations were written in the log.
Chapter 21 Database Recovery Techniques
Recovery Techniques Based on Immediate Update
• UNDO/REDO Immediate Update With Concurrent Execution
– PROCEDURE RIU_M
• Use two lists of transactions maintained by the system: the committed transactions since the last check point, and the active transactions.
• Undo all the write_item operations of the active (uncommitted) transactions, using the UNDO procedure. The operations should be undone in the reverse of the order in which they were written in the log.
• Redo all the write_item operations of the committed transactions from the log, in the order in which they were written into the log.
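The UNDO-then-REDO passes can be sketched over a toy log; the tuple layout (T, X, old_value, new_value) mirrors the [write_item, T, X, old_value, new_value] entries described above:

```python
def riu_m(db, log, committed):
    """UNDO/REDO recovery sketch (immediate update).
    db: on-disk state at the time of failure (a dict here);
    log: list of (T, X, old_value, new_value) write entries in log order;
    committed: transactions committed since the last checkpoint."""
    # UNDO the writes of active (uncommitted) transactions,
    # in reverse log order, restoring before images (BFIM).
    for txn, item, old, new in reversed(log):
        if txn not in committed:
            db[item] = old
    # REDO the writes of committed transactions, in forward log order,
    # installing after images (AFIM).
    for txn, item, old, new in log:
        if txn in committed:
            db[item] = new
    return db
```

Below, uncommitted T2's write to Y is undone back to its before image, while committed T1's write to X is redone.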
Chapter 21 Database Recovery Techniques
Shadow Paging
• A page table is kept in the main memory. When a transaction begins executing, the current page table is copied into a shadow page table, which is then saved on the disk. During transaction execution, the shadow page table is never modified.
• When a write_item operation is performed, a new copy of the modified database page is created, which is written on a new disk block, and then the current page table is modified to point to the new disk block. The two versions are kept: the old version is referenced by the shadow page table, and the new version by the current page table.
• To recover from a failure during transaction execution:
– discard the current page table;
– free the modified database pages;
– reinstate the shadow page table as the current page table.
• Committing a transaction:– discard the previous shadow page table;– free the old pages.
• Advantage: no need to undo and redo. This technique can be categorized as a NO-UNDO/NO-REDO recovery technique.
• Disadvantage: overhead to maintain the page tables in the memory and database pages on the disk.
• This recovery scheme does not require the log in a single-user environment. In multi-user environment, the log is needed for the concurrency control method.
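A minimal in-memory model of the scheme (the class layout is hypothetical): pages are written to fresh blocks, the current page table is redirected, and abort simply reinstates the shadow table.

```python
class ShadowPaging:
    """Sketch of NO-UNDO/NO-REDO shadow paging. 'disk' maps block numbers
    to page contents; the page tables map page numbers to disk blocks."""
    def __init__(self, pages):
        self.disk = dict(enumerate(pages))        # block -> contents
        self.current = {i: i for i in self.disk}  # current page table
        self.shadow = dict(self.current)          # saved at txn start
        self.next_block = len(self.disk)

    def write_page(self, page, value):
        # New version goes to a fresh block; the shadow table still
        # references the old version's block.
        self.disk[self.next_block] = value
        self.current[page] = self.next_block
        self.next_block += 1

    def read_page(self, page):
        return self.disk[self.current[page]]

    def abort(self):
        self.current = dict(self.shadow)   # reinstate shadow page table

    def commit(self):
        self.shadow = dict(self.current)   # discard old shadow table
```

After an abort, reads go back to the old blocks with no UNDO work; after a commit, the new page table simply becomes the shadow, with no REDO work.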
Object Database
• Object Oriented Database (OODB)– ODMG (Object Data Management Group) Standard
– Players: Ontos, ObjectStore
• Object-Relational Database Systems– ADTs
– SQL3
Dimensional Modeling
Introduction
• Dimensional Modeling Approach– Seeks user understandability, query performance, and resilience to change
• Data Warehouse
– Bill Inmon: a data warehouse is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management decisions.
– Ralph Kimball
• Typical Functionalities of Data Warehouse– Roll-up: Data is summarized with increasing generalization (e.g., weekly to quarterly to
annually).
– Drill-down: increasing levels of detail are revealed (the complement of roll-up).
– Pivot: cross tabulation (also referred to as rotation) is performed.
– Slice and dice: performing projection operations on the dimensions.
– Sorting: data is sorted by ordinal value.
– Selection: data is available by value or range.
– Derived (computed) attributes: attributes are computed by operations on stored and
derived values.
Database Requirements
• OLTP (or Operational Database) – Data Capture
• Short time frame, rapid change data
• Entered/updated by DB users
• High data input performance
– Data Retrieval• Record-level access
• Predictable usage pattern
• OLAP (or Data Warehouse)– Data Capture
• Long time frame, static data• Entered by IT
– Data Retrieval• Summarizing huge blocks of data• Less predictable usage pattern• High query performance
Data Modeling
• Entity-Relationship Modeling– Model the structure of data
– Use normalization to facilitate data capture and storage
– Used in operational database design
• Dimensional Modeling – Model the semantics of data
– Use de-normalization to facilitate data presentation
– Used in data warehouse design
Multidimensional Database
• ROLAP/MOLAP
• Hypercube
• Advantage – high query performance
• Disadvantage – Large dataset size
– Low load performance
• Suitable for Data Mart
[Hypercube example: product (Bus, Truck, Car, SUV) × time (Q1–Q4) × region (N, E, S, W)]
Dimensional Modeling Overview
• Present Data in a Standard Framework
• Distinguish Roles of Data– Attributes – Descriptive data
– Facts – Measured values
• Dimensional Framework– Categorize Attributes into Dimensions
– Categorize Facts into Fact Tables
Star Schema
• Fact Table– Foreign keys– Facts
• Dimension Tables– Primary key– Attributes
• Use Surrogate Keys – Simple integers– Improve understandability by avoiding meaningful keys– Improve performance
Star Schema and Report
Sales Fact
time_key, store_key, product_key, promotion_key, dollars, units, cost
Time Dimension
time_key
SQL_date, day_of_week, week_number, month, more attributes
Store dimension
store_key
store_id, store_name, address, district, region
Product dimension
product_key
SKU, description, brand, category, package_type, size, flavor
Promotion dimension
promotion_key
promotion_name, promotion_type, price_treatment, ad_treatment, display_treatment, coupon_type
District    Brand       Total Dollars  Total Cost  Gross Profit
Atherton    Clean Fast  $1,233         $1,058      $175
Atherton    More Power  $2,239         $2,200      $39
Atherton    Zippy       $848           $650        $198
Belmont     Clean Fast  $2,097         $1,848      $249
Belmont     More Power  $2,428         $2,350      $78
Belmont     Zippy       $633           $580        $53
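A report like this is produced by joining the fact table to its dimensions and grouping by the dimension attributes. The sketch below exercises a small hypothetical subset of the star schema above with SQLite from Python (table and column names follow the example, but the data loaded here is only a sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE store_dim (store_key INTEGER PRIMARY KEY, district TEXT);
CREATE TABLE product_dim (product_key INTEGER PRIMARY KEY, brand TEXT);
CREATE TABLE sales_fact (store_key INT, product_key INT,
                         dollars REAL, cost REAL);
""")
con.executemany("INSERT INTO store_dim VALUES (?, ?)",
                [(1, "Atherton"), (2, "Belmont")])
con.executemany("INSERT INTO product_dim VALUES (?, ?)",
                [(1, "Clean Fast"), (2, "Zippy")])
con.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?)",
                [(1, 1, 1233, 1058), (1, 2, 848, 650), (2, 1, 2097, 1848)])

# The report query: join the fact table to its dimension tables,
# group by the dimension attributes, aggregate the facts.
rows = con.execute("""
    SELECT s.district, p.brand,
           SUM(f.dollars), SUM(f.cost),
           SUM(f.dollars - f.cost) AS gross_profit
    FROM sales_fact f
    JOIN store_dim s ON s.store_key = f.store_key
    JOIN product_dim p ON p.product_key = f.product_key
    GROUP BY s.district, p.brand
    ORDER BY s.district, p.brand
""").fetchall()
```

Surrogate integer keys join the fact rows to their dimensions; all descriptive attributes (district, brand) come from the dimension tables, all measures (dollars, cost) from the fact table.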
Dimensional Modeling Process
• Design Conformed Dimensions
• Establish Conformed Fact Definitions
• For Each Subject Area
– Decide the granularity of the fact table
– Choose the dimensions
– Choose the facts
Snowflake Schema
• Snowflaking
– Move low-cardinality attributes in a dimension to a separate table (an outrigger)
• Saves Space
• Complicates Data Presentation
Snowflake Schema Example
Sales Fact
time_key, store_key, product_key, promotion_key, dollars, units, cost

Product Dimension
product_key (PK)
brand_key, package_type_key, SKU, description, size, flavor

Brand Outrigger
brand_key (PK)
category_key, brand, brand_description

Category Outrigger
category_key (PK)
category, category_desc

Package Type Outrigger
package_type_key (PK)
package_type
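A sketch of the snowflaked product dimension in SQLite, assuming the slide's table names (the SKU and sample rows are illustrative):

```python
# Snowflaking sketch: brand and category attributes move out of the product
# dimension into outrigger tables, each joined by its own surrogate key.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE category_outrigger ("
            "category_key INTEGER PRIMARY KEY, category TEXT)")
cur.execute("""CREATE TABLE brand_outrigger (
    brand_key    INTEGER PRIMARY KEY,
    category_key INTEGER REFERENCES category_outrigger(category_key),
    brand TEXT)""")
cur.execute("""CREATE TABLE product_dimension (
    product_key INTEGER PRIMARY KEY,
    brand_key   INTEGER REFERENCES brand_outrigger(brand_key),
    sku TEXT, description TEXT)""")

cur.execute("INSERT INTO category_outrigger VALUES (1, 'Cleaners')")
cur.execute("INSERT INTO brand_outrigger VALUES (1, 1, 'Zippy')")
cur.execute("INSERT INTO product_dimension VALUES (1, 1, 'Z-12', 'Zippy 12oz')")

# The space saving is paid for at presentation time: reassembling one
# product row now takes an extra join per outrigger.
row = cur.execute("""
    SELECT p.description, b.brand, c.category
    FROM product_dimension p
    JOIN brand_outrigger    b ON p.brand_key = b.brand_key
    JOIN category_outrigger c ON b.category_key = c.category_key""").fetchone()
print(row)  # ('Zippy 12oz', 'Zippy', 'Cleaners')
```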
Approaches to Common Modeling Situations
• Slowly Changing Dimensions
• Role-Playing Dimensions
• Many-to-Many Dimensions
• Rapidly Changing Monster Dimensions
• Hierarchical Dimensions
• Degenerate Dimensions
• Junk Dimensions
• Factless Fact Tables
• Facts of Different Granularity
Slowly Changing Dimension
• Modify (overwrite) the dimension record (Type 1)
• Save the old value of an attribute in a separate column (Type 3)
• Create a new record with a new surrogate key (Type 2)
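The three tactics can be shown on one store row in SQLite. The Type 1/2/3 labels are the standard Kimball terminology (an addition, not from the slides), and the table and values are illustrative.

```python
# Slowly-changing-dimension sketch: the same store record handled three ways.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE store_dimension (
    store_key    INTEGER PRIMARY KEY,  -- surrogate key
    store_id     TEXT,                 -- natural key
    district     TEXT,
    old_district TEXT,                 -- Type 3 keeps the prior value here
    is_current   INTEGER               -- Type 2 flags the active row
)""")
cur.execute("INSERT INTO store_dimension VALUES (1, 'S001', 'Atherton', NULL, 1)")

# Type 1: overwrite the attribute in place -- history is lost.
cur.execute("UPDATE store_dimension SET district = 'Belmont' WHERE store_key = 1")

# Type 3: keep the old value in a separate column alongside the new one.
# (SQL evaluates the right-hand sides against the pre-update row, so
# old_district receives 'Belmont'.)
cur.execute("""UPDATE store_dimension
               SET old_district = district, district = 'Menlo Park'
               WHERE store_key = 1""")

# Type 2: retire the current row and insert a new one with a fresh
# surrogate key -- full history survives, but the dimension grows.
cur.execute("UPDATE store_dimension SET is_current = 0 WHERE store_key = 1")
cur.execute("INSERT INTO store_dimension VALUES (2, 'S001', 'Palo Alto', NULL, 1)")

history = cur.execute(
    "SELECT store_key, district, old_district, is_current FROM store_dimension"
).fetchall()
print(history)
```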
Role-Playing Dimension
• Definition
A single dimension appears multiple times in the same star
• Create multiple logical dimensions (roles)
– Based on the same dimension table
– Each role has different column names
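One common way to create the roles is as SQL views over the single physical dimension table, each view renaming the columns for its role. A sketch in SQLite, with an assumed order/ship-date example (not from the slides' schema):

```python
# Role-playing sketch: one date dimension plays two roles in the same star
# (order date and ship date), each role a view with its own column names.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE date_dimension ("
            "date_key INTEGER PRIMARY KEY, sql_date TEXT)")
cur.executemany("INSERT INTO date_dimension VALUES (?,?)",
                [(1, "2024-01-05"), (2, "2024-01-09")])
cur.execute("CREATE TABLE order_fact ("
            "order_date_key INTEGER, ship_date_key INTEGER, dollars REAL)")
cur.execute("INSERT INTO order_fact VALUES (1, 2, 100.0)")

# One physical table, two logical dimensions with distinct column names.
cur.execute("""CREATE VIEW order_date AS
    SELECT date_key AS order_date_key, sql_date AS order_date
    FROM date_dimension""")
cur.execute("""CREATE VIEW ship_date AS
    SELECT date_key AS ship_date_key, sql_date AS ship_date
    FROM date_dimension""")

# The fact table joins to the same underlying table twice, unambiguously.
row = cur.execute("""
    SELECT o.order_date, s.ship_date, f.dollars
    FROM order_fact f
    JOIN order_date o ON f.order_date_key = o.order_date_key
    JOIN ship_date  s ON f.ship_date_key  = s.ship_date_key""").fetchone()
print(row)  # ('2024-01-05', '2024-01-09', 100.0)
```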
Many-to-Many Dimension
• Definition
Each fact table record corresponds to multiple dimension records
• Create role-playing dimensions
• Create bridge table
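The bridge-table approach can be sketched as follows: the fact row points at a group key, and the bridge maps each group to its member dimension rows, carrying a weighting factor so the fact can be allocated without double counting. The salesperson example and all names here are assumptions for illustration.

```python
# Bridge-table sketch: one sale involves several salespeople, so the fact
# references a group and the bridge maps group -> members with weights.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE salesperson_dim ("
            "person_key INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE sales_group_bridge ("
            "group_key INTEGER, person_key INTEGER, weight REAL)")
cur.execute("CREATE TABLE sales_fact (group_key INTEGER, dollars REAL)")

cur.executemany("INSERT INTO salesperson_dim VALUES (?,?)",
                [(1, "Ann"), (2, "Bob")])
# Ann and Bob share group 10, each with a 0.5 allocation weight.
cur.executemany("INSERT INTO sales_group_bridge VALUES (?,?,?)",
                [(10, 1, 0.5), (10, 2, 0.5)])
cur.execute("INSERT INTO sales_fact VALUES (10, 200.0)")

# Allocate the fact across the group's members via the bridge weights;
# the weights in each group sum to 1, so totals are preserved.
rows = cur.execute("""
    SELECT d.name, SUM(f.dollars * b.weight) AS allocated
    FROM sales_fact f
    JOIN sales_group_bridge b ON f.group_key  = b.group_key
    JOIN salesperson_dim    d ON b.person_key = d.person_key
    GROUP BY d.name
    ORDER BY d.name""").fetchall()
print(rows)  # [('Ann', 100.0), ('Bob', 100.0)]
```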
Aggregates
• Contain summarized data from fact tables
• Dramatically improve query performance by redirecting user queries to aggregates
• Used in dimensional modeling
• Cost
– More disk space
– Maintaining aggregate metadata
Aggregate Example
Sales Fact
time_key, store_key, product_key, promotion_key, dollars, units, cost

Time Dimension
time_key (PK)
SQL_date, day_of_week, more attributes

Store Dimension
store_key (PK)
store_id, store_name, address, district, region

Product Dimension
product_key (PK)
SKU, description, brand, category, package_type, size, flavor

Promotion Dimension
promotion_key (PK)
promotion_name, promotion_type, price_treatment, ad_treatment, display_treatment, coupon_type

Sales Aggregate
time_key, district_key, brand_key, dollars, units, cost

Time Dimension
time_key (PK)
SQL_date, day_of_week, more attributes

District Rollup
district_key (PK)
district, region

Brand Rollup
brand_key (PK)
brand, category
Redirected query:

SELECT district AS District, brand AS Brand,
       SUM(dollars) AS "Total Dollars",
       SUM(cost) AS "Total Cost",
       SUM(dollars) - SUM(cost) AS "Gross Profit"
FROM Sales_Aggregate f, District_Rollup s, Brand_Rollup p
WHERE f.brand_key = p.brand_key AND f.district_key = s.district_key
GROUP BY district, brand

Original query:

SELECT district AS District, brand AS Brand,
       SUM(dollars) AS "Total Dollars",
       SUM(cost) AS "Total Cost",
       SUM(dollars) - SUM(cost) AS "Gross Profit"
FROM Sales_Fact f, Store_Dimension s, Product_Dimension p
WHERE f.product_key = p.product_key AND f.store_key = s.store_key
GROUP BY district, brand
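The whole round trip — summarize the base fact table into an aggregate, then answer the report from the aggregate instead of the raw facts — can be sketched in SQLite. The schema is simplified from the slides (the aggregate stores district and brand directly rather than rollup keys, an assumption for brevity), and the sample rows are made up.

```python
# Aggregate sketch: build a district/brand summary of the base fact table
# once, then let the report query scan the small aggregate.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE store_dimension ("
            "store_key INTEGER PRIMARY KEY, district TEXT)")
cur.execute("CREATE TABLE product_dimension ("
            "product_key INTEGER PRIMARY KEY, brand TEXT)")
cur.execute("CREATE TABLE sales_fact ("
            "store_key INTEGER, product_key INTEGER, dollars REAL, cost REAL)")
cur.executemany("INSERT INTO store_dimension VALUES (?,?)",
                [(1, "Atherton"), (2, "Atherton")])
cur.execute("INSERT INTO product_dimension VALUES (1, 'Zippy')")
cur.executemany("INSERT INTO sales_fact VALUES (?,?,?,?)",
                [(1, 1, 500.0, 400.0), (2, 1, 348.0, 250.0)])

# Summarize the base fact table ahead of time (roll stores up to districts,
# products up to brands). This is the extra disk-space cost of aggregates.
cur.execute("""CREATE TABLE sales_aggregate AS
    SELECT s.district, p.brand,
           SUM(f.dollars) AS dollars, SUM(f.cost) AS cost
    FROM sales_fact f
    JOIN store_dimension   s ON f.store_key = s.store_key
    JOIN product_dimension p ON f.product_key = p.product_key
    GROUP BY s.district, p.brand""")

# The redirected query reads pre-summarized rows -- no joins against the
# base fact table, and far fewer rows to group.
row = cur.execute("""
    SELECT district, brand, SUM(dollars), SUM(cost),
           SUM(dollars) - SUM(cost)
    FROM sales_aggregate
    GROUP BY district, brand""").fetchone()
print(row)  # ('Atherton', 'Zippy', 848.0, 650.0, 198.0)
```

An aggregate navigator performs this redirection transparently, using aggregate metadata to decide which queries the summary table can answer.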
Aggregate Overview
• Decide Aggregation Level
• Choose Dimensions
• Roll Dimensions Up
– Create a new dimension (rollup) by removing high-cardinality attributes from a dimension
• Summarize Base Fact Table
• Aggregate Navigation
– Maintain aggregate metadata
– Rewrite user queries to access the aggregate table