© Raúl García-Castro
ESWC 2010 Tutorial on Evaluation of Semantic Web Technologies
Evaluating the conformance and interoperability of semantic technologies

Raúl García-Castro
Ontology Engineering Group, Laboratorio de Inteligencia Artificial
Facultad de Informática, Universidad Politécnica de Madrid

30th May 2010
Evaluating conformance and interoperability. May 30th 2010. © Raúl García-Castro
Table of contents
• Evaluating conformance
• Evaluating interoperability
• Test data
  – RDF(S) Import Test Suite
  – OWL Lite Import Test Suite
  – OWL DL Import Test Suite
• Running the evaluations
  – IBSE
  – SEALS Platform
• Conclusions
CONFORMANCE
Conformance in the Semantic Web
• Conformance is the ability of semantic technologies to adhere to existing specifications
  – In terms of ontology representation languages (RDF(S), OWL, etc.)
• Different types of conformance, regarding the ontology language:
  – Knowledge model
  – Serialization
  – Semantics
• Conformance is a primary requirement for semantic technologies:
  – Tool validation
  – Feature analysis
Conformance evaluation
• Goal: to evaluate the conformance of semantic technologies with regard to ontology representation languages
• Applicability:
  – Only requirement: that the tool is able to import and export ontologies in the ontology language

[Figure: conformance evaluation with Tool X. Step 1: Import + Export. The original ontology O1 is imported as O1' and exported as O1'', with O1 = O1'' + α - α', where α and α' denote the information added and lost in the process.]
Metrics
• Execution informs about the correct execution:
  – OK: no execution problem
  – FAIL: some execution problem
  – Platform Error (P.E.): platform exception
• Information added or lost is measured in terms of triples: Oi = Oi' + α - α'
• Conformance informs whether the ontology has been processed correctly, with no addition or loss of information (Oi = Oi'?):
  – SAME if Execution is OK and Information added and Information lost are void
  – DIFFERENT if Execution is OK but Information added or Information lost are not void
  – NO if Execution is FAIL or P.E.
INTEROPERABILITY
Interoperability in the Semantic Web
• Interoperability is the ability of Semantic Web technologies to interchange ontologies and use them
  – At the information level, not at the system level
  – In terms of knowledge reuse, not information integration
• In the real world it is not feasible to use a single system or a single formalism
• Different behaviours in interchanges, depending on whether the formalisms coincide:

[Figure: interchange examples over an ontology with disjoint classes A and B and a subclass C. Within the same formalism, the disjoint and subclass relations are preserved. Across different formalisms, the disjointness may be emulated with an ad hoc property (myDisjoint), interchanged with less information (LESS), or lost altogether (LOSS).]
Interoperability evaluation
• Goal: to evaluate the interoperability of semantic technologies in terms of the ability of such technologies to interchange ontologies and use them
• Applicability:
  – Only requirement: that the tools are able to import and export ontologies in the ontology language

[Figure: interchange between Tool X and Tool Y over ontology O1 (O1, O1', O1'', O1''', O1'''').
Step 1 (Tool X): Import + Export, O1 = O1'' + α - α'
Step 2 (Tool Y): Import + Export, O1'' = O1'''' + β - β'
Interchange: O1 = O1'''' + α - α' + β - β']
Metrics
• Execution informs about the correct execution:
  – OK: no execution problem
  – FAIL: some execution problem
  – Platform Error (P.E.): platform exception
  – Not Executed (N.E.): second step not executed
• Information added or lost is measured in terms of triples: Oi = Oi' + α - α'
• Interchange informs whether the ontology has been interchanged correctly, with no addition or loss of information (Oi = Oi'?):
  – SAME if Execution is OK and Information added and Information lost are void
  – DIFFERENT if Execution is OK but Information added or Information lost are not void
  – NO if Execution is FAIL, N.E., or P.E.
TEST DATA
General principles
• Only simple ontologies
• Only correct ontologies
• Use the RDF/XML syntax
• Small number of tests
RDF(S) Import Test Suite
Goal: to define tests for "all" the possible relations between the components of the RDF(S) knowledge model.

[Figure: the RDF(S) knowledge model. Classes: rdfs:Resource, rdfs:Class, rdf:Property, rdf:Statement, rdfs:Literal, rdf:XMLLiteral, rdfs:Datatype, rdfs:Container, rdf:Bag, rdf:Seq, rdf:Alt, rdf:List, rdfs:ContainerMembershipProperty. Properties: rdf:type, rdfs:subClassOf, rdfs:subPropertyOf, rdfs:domain, rdfs:range, rdfs:label, rdfs:comment, rdfs:member, rdfs:seeAlso, rdfs:isDefinedBy, rdf:value, rdf:subject, rdf:predicate, rdf:object, rdf:first, rdf:rest.]
What is a relation?

component1 --relation1--> component2: instances of component1 can be related to instances of component2 using the property relation1.

Example: rdfs:label has rdfs:domain rdfs:Resource and rdfs:range rdfs:Literal, so rdfs:Resource --rdfs:label--> rdfs:Literal.
But also, through the subclass hierarchy (rdfs:Class is a subclass of rdfs:Resource; rdf:XMLLiteral is a subclass of rdfs:Literal):
rdfs:Class --rdfs:label--> rdfs:Literal
rdfs:Resource --rdfs:label--> rdf:XMLLiteral
Design principles
Define tests from the RDF(S) knowledge model.
Only consider components commonly used in tools:
• Classes
• Instances
• Properties
• Literals
• Class hierarchies
• Property hierarchies
Beware of cardinalities! (e.g., rdfs:domain relates rdf:Property to rdfs:Class with cardinality *..*)
Cover cardinalities of 0, 1 and 2.
Types of tests
a) Import single components
b) Import all the possible combinations of two components with a property
c) Import combinations of more than two components that usually appear together in RDF(S) graphs
d) RDF(S) graphs with the different variants of the RDF/XML syntax

[Figure: fragments of the RDF(S) knowledge model (rdfs:Resource, rdfs:Class, rdf:Property, rdf:Statement, rdfs:Literal and their properties) highlighting the components exercised by each type of test.]

Example of equivalent RDF/XML syntax variants:

<rdf:Description rdf:about="#class1">
  <rdf:type rdf:resource="&rdfs;Class" />
</rdf:Description>

=

<rdfs:Class rdf:about="#class1">
</rdfs:Class>
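The equivalence of the two syntax variants can be illustrated with a toy converter. This is not a full RDF/XML parser; it handles only the two patterns shown (rdf:Description with a nested rdf:type, and the abbreviated typed-node form), and the undefined `&rdfs;` entity is spelled out as a full URI:

```python
# Toy demonstration that both RDF/XML variants denote the same triple.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

def triples(rdf_xml):
    root = ET.fromstring(rdf_xml)
    out = set()
    for node in root:
        subject = node.attrib[f"{{{RDF}}}about"]
        if node.tag == f"{{{RDF}}}Description":
            for prop in node:                    # long form: explicit rdf:type
                if prop.tag == f"{{{RDF}}}type":
                    out.add((subject, f"{RDF}type",
                             prop.attrib[f"{{{RDF}}}resource"]))
        else:                                    # typed node: tag is the type
            ns, local = node.tag[1:].split("}")
            out.add((subject, f"{RDF}type", ns + local))
    return out

long_form = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:rdfs="{RDFS}">
  <rdf:Description rdf:about="#class1">
    <rdf:type rdf:resource="{RDFS}Class"/>
  </rdf:Description>
</rdf:RDF>"""

short_form = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:rdfs="{RDFS}">
  <rdfs:Class rdf:about="#class1"/>
</rdf:RDF>"""

print(triples(long_form) == triples(short_form))  # True
```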
RDF(S) Import Test Suite

RDF(S) component combinations:

Group                           No.  Components
Class                            2   rdfs:Class
Metaclass                        5   rdfs:Class, rdf:type
Subclass                         5   rdfs:Class, rdfs:subClassOf
Class and property               6   rdfs:Class, rdf:Property, rdfs:Literal
Property                         2   rdf:Property
Subproperty                      5   rdf:Property, rdfs:subPropertyOf
Property with domain and range  24   rdfs:Class, rdf:Property, rdfs:Literal, rdfs:domain, rdfs:range
Instance                         4   rdfs:Class, rdf:type
Instance and property           14   rdfs:Class, rdf:type, rdf:Property, rdfs:Literal
Syntax and abbreviation         15   rdfs:Class, rdf:type, rdf:Property, rdfs:Literal
TOTAL                           82

Plus RDF/XML syntax variants (e.g., the <rdf:Description> form with an explicit rdf:type vs. the abbreviated <rdfs:Class> form).
Design principles
Define tests from the OWL (Lite) Abstract Syntax:
• Cover all the productions and symbols
• Cover cardinalities of 0, 1 and 2
• Limit the number of tests

Example: class descriptions

axiom ::= 'Class(' classID ['Deprecated'] modality { annotation } { super } ')'
modality ::= 'complete' | 'partial'
super ::= classID | restriction
axiom ::= 'EquivalentClasses(' classID classID { classID } ')'
axiom ::= 'Datatype(' datatypeID ['Deprecated'] { annotation } ')'

From super ::= classID | restriction we derive tests for both alternatives (super ::= classID and super ::= restriction); from axiom ::= 'EquivalentClasses(' classID classID { classID } ')' we derive, e.g., axiom ::= 'EquivalentClasses(' classID classID ')'.
OWL Lite Import Test Suite

Component combinations (subclass of class, subclass of restriction, value constraints, cardinality + object property, cardinality + datatype property, set operators):

Group                                                                No.
Class hierarchies                                                    17
Class equivalences                                                   12
Classes defined with set operators                                    2
Property hierarchies                                                  4
Properties with domain and range                                     10
Relations between properties                                          3
Global cardinality constraints and logical property characteristics   5
Single individuals                                                    3
Named individuals and properties                                      5
Anonymous individuals and properties                                  3
Individual identity                                                   3
Syntax and abbreviation                                              15
TOTAL                                                                82

Plus RDF/XML syntax variants.
Design principles
Define tests from the OWL (DL) Abstract Syntax:
• Cover all the productions and symbols
• Limit the number of tests
• Increase exhaustiveness, to maximize the coverage of the knowledge model
• Put the user in the loop: defining tests should be simple, extensible, and parameterized
[Figure: keyword-based test generation. Macro and test definitions (a CSV file) are processed by the keyword-based test generator (an interpreter plus a keyword executor) to produce the test suite: its metadata and the generated ontologies (ontology01.owl, ontology02.owl, ontology03.owl, …).]
Parameterize generation
• Examples:
  – "…for every type of class description"
  – "…using all the built-in annotation properties"
  – "…starting from a depth of 500 and up to a depth of 5,000"
  – …

[Figure: the macro and test definitions (CSV file) are fed to the test generator's interpreter.]
Extracting keywords

Example: class descriptions

description ::= classID
  | restriction
  | 'unionOf(' { description } ')'
  | 'intersectionOf(' { description } ')'
  | 'complementOf(' description ')'
  | 'oneOf(' { individualID } ')'

restriction ::= 'restriction(' datavaluedPropertyID dataRestrictionComponent { dataRestrictionComponent } ')'
  | 'restriction(' individualvaluedPropertyID individualRestrictionComponent { individualRestrictionComponent } ')'

Keyword                                Parameters
createNamedClass                       resultId, className
createClassEnumerated                  resultId, origClassId, individualId1, individualId2
createClassAllValuesFromRestriction    resultId, origClassId, propertyId, classId
createClassSomeValuesFromRestriction   resultId, origClassId, propertyId, classId
createClassHasValueRestriction         resultId, origClassId, propertyId, value
createClassCardinalityRestriction      resultId, origClassId, propertyId, cardinality
createClassMinCardinalityRestriction   resultId, origClassId, propertyId, cardinality
createClassMaxCardinalityRestriction   resultId, origClassId, propertyId, cardinality
createClassIntersection                resultId, origClassId, classId1, classId2
createClassUnion                       resultId, origClassId, classId1, classId2
createClassComplement                  resultId, origClassId, classId
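A keyword executor along these lines can be sketched as a dispatch table from keyword names to triple-emitting functions. The CSV encoding and the triples emitted below are assumptions of this sketch, not the actual implementation:

```python
# Sketch of a keyword executor: each CSV row is a keyword followed by its
# parameters, dispatched to a function that adds triples to the generated
# ontology.
import csv, io

OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def create_named_class(graph, result_id, class_name):
    graph.add((class_name, f"{RDF}type", f"{OWL}Class"))

def create_class_complement(graph, result_id, orig_class_id, class_id):
    graph.add((orig_class_id, f"{OWL}complementOf", class_id))

KEYWORDS = {
    "createNamedClass": create_named_class,
    "createClassComplement": create_class_complement,
}

def execute(csv_text):
    graph = set()
    for row in csv.reader(io.StringIO(csv_text)):
        keyword, *params = [cell.strip() for cell in row]
        KEYWORDS[keyword](graph, *params)
    return graph

graph = execute("createNamedClass,d1,ex:A\n"
                "createNamedClass,d1,ex:B\n"
                "createClassComplement,d1,ex:A,ex:B\n")
print(len(graph))  # 3
```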
Defining macros

Benefits:
• Easily build new tests
• Define complex patterns

MACRO: createObjectPropertyDomainAndRange descriptionId propertyId classId1 classId2
Definition:
  createObjectProperty descriptionId propertyId
  addPropertyDomain descriptionId classId1
  addPropertyRange descriptionId classId2

MACRO: createNamedClassWithLabel descriptionId classId
Definition:
  createNamedClass descriptionId classId
  addAnnotationLiteral descriptionId rdfs:label classId@en
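Macro expansion is essentially parameter substitution over the primitive keywords. A minimal sketch, using the createObjectPropertyDomainAndRange macro defined on this slide:

```python
# Sketch of macro expansion: a macro maps one keyword line to several
# primitive keyword lines, substituting its formal parameters.

MACROS = {
    "createObjectPropertyDomainAndRange": {
        "params": ["descriptionId", "propertyId", "classId1", "classId2"],
        "body": [
            ["createObjectProperty", "descriptionId", "propertyId"],
            ["addPropertyDomain", "descriptionId", "classId1"],
            ["addPropertyRange", "descriptionId", "classId2"],
        ],
    },
}

def expand(line):
    keyword, *args = line
    macro = MACROS.get(keyword)
    if macro is None:
        return [line]                      # primitive keyword: keep as is
    binding = dict(zip(macro["params"], args))
    return [[binding.get(tok, tok) for tok in body_line]
            for body_line in macro["body"]]

lines = expand(["createObjectPropertyDomainAndRange", "d1", "ex:p", "ex:A", "ex:B"])
for l in lines:
    print(" ".join(l))
# createObjectProperty d1 ex:p
# addPropertyDomain d1 ex:A
# addPropertyRange d1 ex:B
```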
OWL DL Import Test Suite
• Three types of tests:
  – Simple combinations of components
    • Class / property / individual descriptions
    • Class / property / individual axioms
    • Property characteristics
    • Data ranges
    • Annotation properties
  – Combinations of components that usually appear together
    • Properties with domain and range
    • Individuals and properties
  – Restrictions in the use of components
    • Cardinalities greater than 1
    • Class descriptions as object
    • Class descriptions as subject
• 561 test cases!
RUNNING
Test and result representation
• Test Suite ontology
  – Conformance Test Suite ontology
  – Interoperability Test Suite ontology
• Test Output ontology
  – Conformance Test Output ontology
  – Interoperability Test Output ontology

[Figure: excerpt of the Test Suite ontologies. A Test belongsTo a TestSuite and usesOntologyDocument an OntologyDocument; xsd:string properties include hasAuthor, hasVersion, hasId, isLocatedAtURL, hasOntologyName, hasOntologyNamespace, and hasRepresentationLanguage. ConformanceTest and ConformanceTestSuite are subclasses (rdfs:subClassOf) of Test and TestSuite, with the properties belongsToConformanceTS, coversOntologyLanguage, and coversOntologyLanguageFeature.]
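A test description following this ontology could be encoded as a handful of triples; the namespace and instance names below are hypothetical, chosen only to illustrate the structure:

```python
# Sketch of a conformance test description as triples, following the
# Test Suite ontology excerpt above (all names below are illustrative).
NS = "http://example.org/testsuite#"

test_description = {
    (f"{NS}rdfs-import-test-01", f"{NS}belongsToConformanceTS", f"{NS}rdfs-import-suite"),
    (f"{NS}rdfs-import-test-01", f"{NS}coversOntologyLanguage", "RDF(S)"),
    (f"{NS}rdfs-import-test-01", f"{NS}usesOntologyDocument", f"{NS}ontology01"),
    (f"{NS}ontology01", f"{NS}isLocatedAtURL", "http://example.org/tests/ontology01.rdf"),
}
print(len(test_description))  # 4
```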
The IBSE tool
• Automatically executes tests between all the tools
• Allows configuring different execution parameters
• Uses ontologies to represent tests and results
• Depends on external ontology comparers (Jena + Pellet and RDF-utils)

[Figure: IBSE workflow. (1) Describe tests: test descriptions are ontologies (e.g., the OWL Lite Import Test Suite, with benchmarkOntology and resultOntology instances). (2) Execute tests against the tools, producing execution results. (3) Generate reports (HTML, SVG).]

http://knowledgeweb.semanticweb.org/benchmarking_interoperability/ibse/
The SEALS Platform

[Figure: SEALS Platform architecture. The SEALS Portal sends entity management requests and evaluation requests to the SEALS Service Manager, which coordinates the Runtime Evaluation Service and the SEALS Repositories: the Test Data Repository Service, Tools Repository Service, Results Repository Service, and Evaluation Descriptions Repository Service.]
CONCLUSIONS
Are there any results?
• RDF(S) Interoperability Benchmarking
• OWL (Lite) Interoperability Benchmarking
• Results: per tool, global, and evolution over time
• Summary: [table of per-tool results, e.g., SemTalk, by formalism (Frames, OWL)]
• IRIBA: http://knowledgeweb.semanticweb.org/iriba/
• http://knowledgeweb.semanticweb.org/benchmarking_interoperability/owl/2008-07-06_Results/
• http://fusion.cs.uni-jena.de/professur/research/activities/docs/ESWC09%20Tutorial%20-%2002%20Interoperability.pdf
Conclusions
Methods for evaluating conformance and interoperability:
• Common for different semantic technologies
• Problem-focused instead of tool-focused
• Provide data about other characteristics (e.g., robustness)

Resources for evaluating conformance and interoperability:
• All the test suites, software, and results are publicly available
• Independent of:
  – The interchange language
  – The input ontologies

Keyword-based test definition + automatic test execution:
• Affordable for evaluators (end users, developers, etc.)
• Test definition at large scale
• Effective tests are still needed, which requires effort
• Result analysis is still hard
SEALS Yardsticks for Ontology Management
3 evaluation scenarios:
• OET Conformance 2010
• OET Interoperability 2010
• OET Scalability 2010

5 evaluation datasets:
• RDF(S) Import Test Suite
• OWL Lite Import Test Suite
• OWL DL Import Test Suite
• OWL Full Import Test Suite
• Scalability Test Suite

Timeline:
• May 2010: registration opens
• May-June 2010: evaluation materials and documentation are provided to participants
• July 2010: participants upload their tools
• August 2010: evaluation scenarios are executed
• September 2010: evaluation results are analysed
• November 2010: evaluation results are discussed in a workshop
http://www.seals-project.eu/seals-evaluation-campaigns/ontology-engineering-tools
Join the evaluation campaign!
Thank you for your attention!