Ontology Feature Extraction via Vector Learning Algorithm and Applied to Similarity Measuring and Ontology Mapping

Meihui Lan, Jian Xu, and Wei Gao

Abstract—In recent years, many learning technologies have been applied in ontology similarity measuring and ontology mapping via learning an ontology function f : V → R which maps an ontology graph to the real line. In these settings, all the information for an ontology vertex (corresponding to a concept) is expressed as a vector. However, in a special application, the value of the ontology function for each ontology vertex is determined by only a few components of the vector. The aim of feature extraction for the ontology vector is to obtain these components and thus fix the index set of the vector, and such a procedure is equivalent to learning an ontology sparse vector in which most components are zero. In this paper, we raise an ontology sparse vector learning model for ontology similarity measuring and ontology mapping in terms of SOCP. The balance term consists of the Ω norm, and a directed acyclic graph is employed in the ontology setting for the backward and forward procedure. Then, an active index set algorithm is designed for larger values of p, thus extending the applications. Finally, five experiments on various fields are presented to verify the efficiency of the new ontology algorithm for ontology similarity measuring and ontology mapping in multidisciplinary research.

Index Terms—ontology, similarity measure, ontology mapping, ontology sparse vector, second order cone programming

I. INTRODUCTION

ONTOLOGY is derived from philosophy to describe the natural connection of things and the inherently hidden connections of their components. In information and computer science, ontology is often taken as a model for knowledge storage and representation. It has shown extensive applications in a variety of fields, such as knowledge management, machine learning, information systems, image retrieval, information retrieval search extension, collaboration and intelligent information integration. In recent years, because of its efficiency as a conceptual semantic model and an analysis tool, ontology has been favored by researchers from pharmacology, biology, medical science, geographic information systems and the social sciences (for instance, see Przydzial et al. [1], Koehler et al. [2], Ivanovic and Budimac [3], Hristoskova et al. [4], and Kabir [5]).

Manuscript received June 18, 2015; revised August 21, 2015. This work was supported in part by the Key Laboratory of Educational Informatization for Nationalities, Ministry of Education, and the National Natural Science Foundation of China (60903131).

M. H. Lan is with the Department of Computer Science and Engineering, Qujing Normal University, Qujing, 655011, China, e-mail: lan-[email protected].

J. Xu is with the Department of Computer Science and Engineering, Qujing Normal University, Qujing, 655011, China, e-mail: [email protected].

W. Gao is with the School of Information and Technology, Yunnan Normal University, Kunming, 650500, China, e-mail: [email protected].

The structure of an ontology is usually represented as a simple graph by researchers. We make every concept in the ontology correspond to a vertex, and likewise for objects and elements. Then each (directed or undirected) edge on an ontology graph symbolizes a relationship (or potential link) between two concepts (objects or elements). Let O be an ontology and G be a simple graph corresponding to O. The nature of ontology engineering applications can be attributed to obtaining the similarity calculating function, which is used to compute the similarities between ontology vertices. These similarities represent the intrinsic links between vertices in the ontology graph. The ontology similarity measuring function is obtained by measuring the similarity between vertices from different ontologies, which is the goal of ontology mapping. The mapping serves as a bridge connecting different ontologies, through which a potential association between the objects or elements from different ontologies is gained. More precisely, the semi-positive score function Sim : V × V → R+ ∪ {0} maps each pair of vertices to a non-negative real number.

In recent years, ontology technologies have shown extensive applications in various fields. Ma et al. [6] presented a technology for stable semantic measurement based on the graph derivation representation. Li et al. [7] raised an ontology representation method for online shopping customer knowledge in enterprise information. By processing expert knowledge from external domain ontologies and in terms of novel matching tricks, Santodomingo et al. [8] raised a creative ontology matching system which gives complex correspondences. Pizzuti et al. [9] described the main features of the food ontology and some examples of its application for traceability purposes. Lasierra et al. [10] argued that ontologies can be used to design an architecture for monitoring patients at home. More ontology applications in various engineering fields can be found in [11], [12], [13] and [14].

Using an ontology learning algorithm is a good way to solve the ontology similarity computation: an ontology function f : V → R is obtained, and after applying it, the ontology graph is mapped onto a line made up of real numbers. The similarity between two concepts can then be measured by comparing the difference between their corresponding real numbers. Dimensionality reduction is the essence of this idea. In order to associate the ontology function with ontology applications, all the information of a vertex v is expressed as a vector. We then slightly abuse notation and use v to denote both the ontology vertex and its corresponding vector in order to simplify the representation. The vector is mapped to a real number by a dimensionality reduction operator, the ontology function f : V → R, which maps multi-dimensional vectors into one-dimensional vectors.


There are several effective methods for obtaining an efficient ontology similarity measure or ontology mapping algorithm in terms of an ontology function. Wang et al. [15] considered the ontology similarity calculation in terms of ranking learning technology. Huang et al. [16] raised a fast ontology algorithm in order to cut the time complexity for ontology applications. Gao and Liang [17] presented an ontology optimizing model in which the ontology function is determined by virtue of the NDCG measure, and it is successfully applied in physics education. Since large parts of ontology structures can be tree-shaped, researchers have explored the learning theory approach for ontology similarity calculating and ontology mapping in the specific setting where the structure of the ontology graph has no cycle. In the multi-dividing ontology setting, all vertices in the ontology graph or multi-ontology graph are divided into k parts corresponding to the k classes of rates. The rate values of all classes are determined by experts. In this way, a vertex in rate a has a larger score than any vertex in rate b (if 1 ≤ a < b ≤ k) under the multi-dividing ontology function f : V → R. Finally, the similarity between two ontology vertices corresponding to two concepts (or elements) is judged by the difference of the two real numbers to which they correspond. Hence, the multi-dividing ontology setting is suitable for obtaining a score ontology function for an ontology application if the ontology is drawn as a cycle-free structure.

In this article, we present a new ontology learning algorithm for ontology similarity measuring and ontology mapping by means of SOCP (second order cone programming). The rest of the paper is arranged as follows: in Section II, a detailed description of the setting and notations for our ontology problem is given; in Section III, we obtain the main ontology index set algorithm based on SOCP; in Section IV, five simulation experiments on plant science, humanoid robotics, biology, physics education and university applications are designed to test the efficiency of our new ontology algorithm, and the results indicate that our algorithm has a high precision ratio for these applications.

    II. SETTING AND NOTATIONS

Let V ⊂ R^d (d ≥ 1) be a vertex space (or instance space) for the ontology graph, and suppose the vertices (or instances) in V are drawn randomly and independently according to some (unknown) distribution. Given a training set S = {v1, · · · , vn} of size n in V, the goal of ontology learning algorithms is to obtain a score function f : V → R, which assigns a score to each vertex.

Since the vector corresponding to a vertex of the ontology graph contains all the information of the vertex concept, its attributes and its neighborhood structure in the ontology graph, it usually has a high dimension. For instance, in a biological ontology, a vector may contain the information of all genes. In addition, an ontology graph with a large number of vertices makes the ontology structure very complicated, and the most typical example is the GIS (Geographic Information System) ontology. These factors may lead to a very large amount of similarity calculation in ontology applications. However, in fact, the similarity between the vertices is determined by a small part of the vector components.

For example, in the application of biological ontology, a genetic disease often results from a small number of genes, leaving most of the other genes irrelevant. Furthermore, in the application of geographic information system ontology, if an accident happens in a place and causes casualties, then we need to find the nearest hospital, ignoring the schools and shops nearby, i.e., we just need to find the neighborhood information that meets specific requirements on the ontology graph. Therefore, tremendous academic and industrial interest has been attracted to research into sparse ontology algorithms.

In practical applications, the ontology function can be expressed by

f_\beta(v) = \sum_{i=1}^{p} v_i \beta_i.  (1)

Here β = (β1, · · · , βp) is an ontology sparse vector which is used to shrink irrelevant components to zero. To determine the ontology function f, we should first learn the sparse vector β. One popular ontology learning model with a balance term g(β) of the unknown sparse vector β ∈ R^p is

\min_{\beta \in \mathbb{R}^p} Y(\beta) = l(\beta) + g(\beta),  (2)

where l(β) is a smooth and convex ontology loss function and g(β) is a balance term which controls the sparsity of the ontology sparse vector β. For example, the balance term usually takes the form g(β) = λ‖β‖_1.
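To make (1) and (2) concrete, the following Python sketch (our illustration; the paper's own implementation is reported in C++, and all names and data below are hypothetical) evaluates the ontology function f_β and the l1-balanced objective with a square loss.

```python
import numpy as np

def ontology_score(V, beta):
    """Ontology function (1): f_beta(v) = sum_i v_i * beta_i, applied to every row of V."""
    return V @ beta

def objective(V, y, beta, lam):
    """Model (2) with square loss l and balance term g(beta) = lam * ||beta||_1."""
    residual = ontology_score(V, beta) - y
    loss = 0.5 * np.mean(residual ** 2)       # smooth convex ontology loss l(beta)
    balance = lam * np.sum(np.abs(beta))      # sparsity-inducing balance term g(beta)
    return loss + balance

# Toy usage: 5 ontology vertices, each described by a p = 4 dimensional vector.
V = np.random.randn(5, 4)
y = np.random.randn(5)
beta = np.array([0.7, 0.0, 0.0, -0.3])        # sparse ontology vector: most components are zero
print(ontology_score(V, beta), objective(V, y, beta, lam=0.1))
```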

For fixed β ∈ R^p and J ⊆ {1, · · · , p} with cardinality |J|, β_J denotes the vector in R^{|J|} whose elements are the components of β indexed by the subset J. For M ∈ R^{p×m}, M_{IJ} ∈ R^{|I|×|J|} denotes the submatrix of M restricted to the columns indexed by J and the rows indexed by I. For an arbitrary finite set A with cardinality |A|, the |A|-tuple (y^a)_{a∈A} ∈ R^{p×|A|} is the collection of p-dimensional vectors y^a indexed by the elements of A. Let Y be the set of responses (for instance, Y = R); in this paper we discuss the ontology problem of predicting a random variable Y ∈ Y. The sample set is denoted as n observations (v_i, y_i) ∈ R^p × Y, i = 1, · · · , n. The empirical risk of the sparse ontology vector β ∈ R^p is denoted by

l(\beta) = \frac{1}{n}\sum_{i=1}^{n} l(y_i, \beta^T v_i),

where l : Y × R → R_+ is a convex and continuously differentiable ontology loss function.

Let C be a sub-collection of the index set {1, · · · , p} satisfying ∪_{C∈C} C = {1, · · · , p}. We emphasize here that C need not be a partition of {1, · · · , p}, and it is possible for elements of C to overlap. Let (d^C)_{C∈C} be a |C|-tuple of p-dimensional vectors with d^C_j > 0 if j ∈ C and d^C_j = 0 otherwise. The norm Ω for the balance part is then introduced as

\Omega(\beta) = \sum_{C\in\mathcal{C}} \Big(\sum_{j\in C} (d^C_j)^2 |\beta_j|^2\Big)^{\frac{1}{2}} = \sum_{C\in\mathcal{C}} \|d^C \cdot \beta\|_2.  (3)

The same variable β_j contained in two distinct index sets C_1, C_2 ∈ C is allowed to be weighted differently in C_1 and C_2 (by d^{C_1}_j and d^{C_2}_j respectively).
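The norm (3) is straightforward to evaluate once the collection C and the weights (d^C) are fixed. Below is a minimal Python sketch of this computation; the list-based representation of the index sets and weights is an assumption made only for illustration.

```python
import numpy as np

def omega(beta, groups, weights):
    """Balance norm (3): Omega(beta) = sum over C in the collection of ||d^C . beta||_2.

    groups  : list of index lists (the collection C; the sets may overlap)
    weights : list of positive weight vectors d^C, aligned entry by entry with `groups`
    """
    total = 0.0
    for C, dC in zip(groups, weights):
        total += np.linalg.norm(np.asarray(dC) * beta[np.asarray(C)])
    return total

# Toy usage with p = 4: two overlapping index sets share variable 1,
# which is weighted differently in each of them.
beta = np.array([0.5, -1.0, 0.0, 2.0])
groups = [[0, 1], [1, 2, 3]]
weights = [[1.0, 1.0], [0.5, 1.0, 1.0]]
print(omega(beta, groups, weights))
```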

We consider the following ontology sparse problem:

\min_{\beta\in\mathbb{R}^p} \frac{1}{n}\sum_{i=1}^{n} l(y_i, \beta^T v_i) + \mu\,\Omega(\beta),  (4)

where µ ≥ 0 is an ontology balance parameter. In what follows, let β̂ be the solution of ontology problem (4).
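For intuition only, a deliberately naive way to attack problem (4) with a square loss is plain subgradient descent on the non-smooth objective. The paper itself solves the problem via SOCP and the active index set algorithm of Section III; the following sketch, with a hypothetical step size and the same group representation as above, is merely meant to show how (3) and (4) fit together.

```python
import numpy as np

def subgradient_descent(V, y, groups, weights, mu, steps=500, eta=0.01):
    """Naive subgradient descent for problem (4) with square loss and the norm Omega of (3)."""
    n, p = V.shape
    beta = np.zeros(p)
    for _ in range(steps):
        grad = V.T @ (V @ beta - y) / n          # gradient of the smooth empirical risk
        sub = np.zeros(p)                        # a subgradient of Omega at the current beta
        for C, dC in zip(groups, weights):
            C, dC = np.asarray(C), np.asarray(dC)
            gC = dC * beta[C]
            nrm = np.linalg.norm(gC)
            if nrm > 0:                          # at zero, the zero vector is a valid choice
                sub[C] += dC * gC / nrm
        beta -= eta * (grad + mu * sub)
    return beta

# Hypothetical toy data: only the first two of six components are relevant.
rng = np.random.default_rng(0)
V = rng.standard_normal((40, 6))
y = V @ np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0])
print(np.round(subgradient_descent(V, y, [[0, 1, 2], [2, 3, 4, 5]],
                                   [[1, 1, 1], [1, 1, 1, 1]], mu=0.1), 3))
```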


III. MAIN ONTOLOGY LEARNING ALGORITHM

    A. Backward and Forward Procedure

In this part, we discuss the connection between the nonzero patterns satisfied by the estimated sparse ontology vector β̂ and the norm Ω defined in (3). First, we describe the collection of nonzero patterns; then we show how to go back and forth between index sets and patterns in terms of the forward and backward procedures.

The balance term Ω(β̂) = Σ_{C∈C} ‖d^C · β̂‖_2 is a mixed (l_1, l_2)-norm. At the index set level it operates like an l_1-norm, and thus Ω leads to index set sparsity: each d^C · β̂, and equivalently each β̂_C, is encouraged to be 0. On the other hand, the l_2-norm does not induce further sparsity within the index sets C ∈ C. Consequently, for a fixed sub-collection of index sets C′ ⊆ C, the vectors β̂_C associated with the index sets C ∈ C′ are exactly the ones set to 0, which yields a set of zeros that is the union of these index sets, ∪_{C∈C′} C. Thus, the collection of permitted zero patterns is the union closure of C, i.e.,

Z = \{\cup_{C\in\mathcal{C}'} C \; ; \; \mathcal{C}' \subseteq \mathcal{C}\}.

Instead of considering the collection of zero patterns Z, it is convenient to deal with nonzero patterns, and we set

P = \{\cap_{C\in\mathcal{C}'} C^c \; ; \; \mathcal{C}' \subseteq \mathcal{C}\} = \{Z^c : Z \in \mathcal{Z}\}.

It is equivalent to employ P or Z, by taking the complement of each member of these collections.
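For small collections C, the zero patterns Z and nonzero patterns P can be enumerated directly from their definitions. The brute-force sketch below (exponential in |C|, so only for illustration on toy collections) makes the union-closure and complement operations explicit.

```python
from itertools import combinations

def zero_patterns(groups):
    """Union closure Z of the collection C: all unions of sub-collections, including the empty one."""
    Z = {frozenset()}
    for r in range(1, len(groups) + 1):
        for combo in combinations(groups, r):
            Z.add(frozenset().union(*combo))
    return Z

def nonzero_patterns(groups, p):
    """Allowed nonzero patterns P = {Z^c : Z in Z}, complements taken inside {0, ..., p-1}."""
    full = frozenset(range(p))
    return {full - Z for Z in zero_patterns(groups)}

# Toy usage: C = {{0,1}, {1,2}} over p = 3 variables.
groups = [frozenset({0, 1}), frozenset({1, 2})]
print(sorted(map(sorted, zero_patterns(groups))))
print(sorted(map(sorted, nonzero_patterns(groups, 3))))
```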

Suppose that l : (y, y′) → l(y, y′) is nonnegative and satisfies, for each pair (y, y′) ∈ R × R,

\frac{\partial^2 l}{\partial y^2} > 0 \quad \text{and} \quad \frac{\partial^2 l}{\partial y\,\partial y'}(y, y') \neq 0.

The Gram matrix of the ontology data is denoted by Q = (1/n) Σ_{i=1}^{n} v_i v_i^T. It can be verified that if Q is invertible or {1, · · · , p} ∈ C, then the ontology optimization problem in (4) with µ > 0 has a unique solution.

Concerning the zero patterns of the solution of the ontology problem in (4), suppose that Y = (y_1, · · · , y_n)^T is a realization of an absolutely continuous probability distribution, and denote by k the maximal number of linearly independent rows of the matrix (v_1, · · · , v_n) ∈ R^{p×n}. For µ > 0, any solution of the ontology problem in (4) with at most k − 1 nonzero coefficients has a zero pattern in Z = {∪_{C∈C′} C ; C′ ⊆ C} almost surely. That is to say, if Y = (y_1, · · · , y_n)^T is a realization of an absolutely continuous probability distribution, then the ontology sparse solutions have a zero pattern in Z = {∪_{C∈C′} C ; C′ ⊆ C}. Therefore, the ontology problem in (4) has a unique solution whenever the Gram matrix Q is invertible, and its zero pattern belongs to Z.

The following four examples describe the norms associated with particular pattern collections.
Example 1. l_2-norm: C consists of only one element, the entire set {1, · · · , p}, and the collection of permitted nonzero patterns consists of ∅ and the entire set {1, · · · , p}.
Example 2. l_1-norm: C is the collection of all singletons, and thus P becomes the set of all possible subsets.
Example 3. l_2-l_1 mixed norm: C is the collection of all singletons together with the entire set {1, · · · , p}, and P again becomes the collection of all possible subsets.
Example 4. Group version of the l_1-norm: C consists of a partition of {1, · · · , p}, and thus P = Z is the collection of all possible unions of the elements of the partition.

In what follows, we focus on the following two problems: (1) starting from the index sets C, is there an efficient method to generate the collection of nonzero patterns P; (2) conversely, given P, how can the index sets C and Ω(β) be designed?

We study the characteristics of the collection of index sets C and its corresponding collections of patterns P and Z. The collection of zero patterns Z (respectively, the collection of nonzero patterns P) is closed under union (respectively, intersection), i.e., for any K ∈ N and arbitrary z_1, · · · , z_K ∈ Z, ∪_{k=1}^{K} z_k ∈ Z (respectively, for p_1, · · · , p_K ∈ P, ∩_{k=1}^{K} p_k ∈ P). This reveals that we should assume the collection to be closed under intersection when reverse-engineering the collection of nonzero patterns; otherwise, the best we can do is to handle its intersection closure.

Given a collection of index sets C, we can define for any subset I ⊆ {1, · · · , p} the C-adapted hull, or simply hull, as

H(I) = \Big\{\bigcup_{C\in\mathcal{C},\, C\cap I=\emptyset} C\Big\}^c,

which is the smallest set in P containing I; we have I ⊆ H(I), with equality if and only if I ∈ P. The hull has a vivid geometrical interpretation for special collections C of index sets. For example, if the variables are organized on a two-dimensional grid and C is obtained from all horizontal and vertical half-spaces, then the hull of a subset I ⊂ {1, · · · , p} is simply the axis-aligned bounding box of I. Analogously, if C is the collection of half-spaces with all possible orientations, the hull is just the regular convex hull.
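A small sketch of the hull computation follows; it simply removes from {1, · · · , p} every index covered by a group disjoint from I. The one-dimensional "prefix and suffix" collection used in the toy example is an assumption chosen so that the hull of a set is the interval spanned by it, mirroring the bounding-box intuition above.

```python
def hull(I, groups, p):
    """C-adapted hull H(I): complement of the union of every index set in C that is disjoint from I."""
    I = set(I)
    outside = set()
    for C in groups:
        if not (set(C) & I):          # C does not intersect I, so it lies entirely outside the hull
            outside |= set(C)
    return set(range(p)) - outside

# Toy usage: variables 0..4 on a line, C made of all "prefixes" and "suffixes" (1-D half-spaces);
# the hull of {1, 3} is then the interval {1, 2, 3}.
p = 5
groups = [set(range(k)) for k in range(1, p)] + [set(range(k, p)) for k in range(1, p)]
print(sorted(hull({1, 3}, groups, p)))
```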

In mathematics and computer science, a directed acyclic graph (DAG for short) is a directed graph without directed cycles, i.e., it consists of a collection of vertices and directed edges, each edge connecting one vertex to another, such that there is no way to start at some vertex v and follow a sequence of edges that eventually loops back to v again. More details on directed acyclic graphs can be found in Torres et al. [18], [19], Marenco et al. [20], Pensar et al. [21] and Kamiyama [22].

Suppose that some prior knowledge about the ontology sparsity structure of a solution β̂ of our ontology problem in (4) is available. This knowledge can be exploited by restricting the patterns obtained via the Ω norm. Specifically, given an intersection-closed collection of zero patterns Z, we can construct back a minimal collection of index sets C by iteratively pruning away, in the directed acyclic ontology graph corresponding to Z, all sets which are unions of their parents in the ontology graph. Algorithm 1 presents the classical backward and forward procedure, which can be found in many references (for example, see Trivisonno et al. [23] and Malvestuto [24]).

Algorithm 1. Backward and Forward Procedure

Part 1. Backward procedure
Input: intersection-closed family of nonzero patterns P.
Output: collection of index sets C.
Initialization: determine Z = {P^c ; P ∈ P} and set C = Z; construct the Hasse diagram of the poset (Z, ⊃).
for t = min_{C∈Z} |C| to max_{C∈Z} |C| do
  for each vertex C ∈ Z such that |C| = t do
    if ∪_{C′∈Children(C)} C′ = C then
      if Parents(C) ≠ ∅ then connect the children of C to the parents of C end if
      delete C from C
    end if
  end for
end for

Part 2. Forward procedure
Input: collection of index sets C = {C_1, · · · , C_M}.
Output: collection of zero patterns Z and nonzero patterns P.
Initialization: Z = {∅}.
for m = 1 to M do
  T = {∅}
  for each Z ∈ Z do
    if (C_m ⊄ Z) and (∀C ∈ {C_1, · · · , C_{m−1}} : C ⊆ Z ∪ C_m ⇒ C ⊆ Z) then
      T ← T ∪ {Z ∪ C_m}
    end if
  end for
  Z ← Z ∪ T
end for
P = {Z^c ; Z ∈ Z}.

The complexity of the backward procedure is O(p|Z|²) and the complexity of the forward procedure is O(p|Z||C|²).

We emphasize that the collection Z (or P) is not changed by removing such an index set from C; this fact is the key observation behind the first part of Algorithm 1.
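The forward procedure can be sketched compactly; the version below follows our reading of Part 2 of Algorithm 1 (including the reconstructed condition C_m ⊄ Z) and is only an illustration, not the authors' implementation.

```python
def forward_procedure(groups):
    """Forward procedure (Part 2 of Algorithm 1): build the zero patterns Z incrementally from C."""
    groups = [frozenset(C) for C in groups]
    Z = {frozenset()}
    for m, Cm in enumerate(groups):
        new_patterns = set()
        for Zset in Z:
            if Cm <= Zset:
                continue                          # Z u Cm would add nothing new
            # add Z u Cm only if every earlier group contained in Z u Cm was already in Z
            if all((not C <= (Zset | Cm)) or (C <= Zset) for C in groups[:m]):
                new_patterns.add(Zset | Cm)
        Z |= new_patterns
    return Z

# Toy usage: the patterns coincide with the union closure of C = {{0,1}, {1,2}}.
print(sorted(map(sorted, forward_procedure([{0, 1}, {1, 2}]))))
```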

    B. Active Ontology Algorithm

For moderate values of p, a solution of ontology problem (4) can be obtained by virtue of generic toolboxes for second order cone programming (SOCP) (see Shi et al. [25], Dalalyan [26], Jiang [27], Frangioni and Gentile [28], and Srirangarajan [29] for more details), with complexity O(p^{3.5} + |C|^{3.5}), which is not appropriate if p or |C| is large.

In this part, we present an active index set algorithm (Algorithm 2) that searches for a solution of the ontology problem by considering increasingly larger active collections and verifying global optimality at every step.

We consider the following ontology problem for λ > 0:

\min_{\beta\in\mathbb{R}^p} \frac{1}{n}\sum_{i=1}^{n} l(y_i, \beta^T v_i) + \frac{\lambda}{2}[\Omega(\beta)]^2.  (5)
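For moderate problem sizes, (5) with a square loss can be handed directly to a generic conic solver, in the spirit of the SOCP toolboxes cited above. The sketch below uses the cvxpy modeling package, which is our choice for illustration and is not mentioned in the paper; the data are hypothetical.

```python
import cvxpy as cp
import numpy as np

def solve_problem_5(V, y, groups, weights, lam):
    """Hand problem (5) with square loss to a generic conic solver through cvxpy."""
    n, p = V.shape
    beta = cp.Variable(p)
    omega = sum(cp.norm(cp.multiply(np.asarray(dC), beta[list(C)]))
                for C, dC in zip(groups, weights))        # the balance norm (3)
    risk = cp.sum_squares(V @ beta - y) / (2 * n)         # empirical ontology risk
    cp.Problem(cp.Minimize(risk + (lam / 2) * cp.square(omega))).solve()
    return beta.value

# Hypothetical toy data: 20 vertices, 6 features, two overlapping index sets.
rng = np.random.default_rng(0)
V = rng.standard_normal((20, 6))
y = V @ np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(20)
beta_hat = solve_problem_5(V, y, [[0, 1, 2], [2, 3, 4, 5]], [[1.0] * 3, [1.0] * 4], lam=0.5)
print(np.round(beta_hat, 3))
```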

In active index set techniques, we build the set of nonzero variables incrementally, denote it by J, and solve the ontology problem only for this collection of variables by adding the constraint β_{J^c} = 0 to ontology problem (5). Let l(β) = (1/n) Σ_{i=1}^{n} l(y_i, β^T v_i) be the empirical ontology risk (which is assumed to be convex and continuously differentiable) and let l^* be its Fenchel conjugate, defined by

l^*(u) = \sup_{\beta\in\mathbb{R}^p} \{\beta^T u - l(\beta)\}.

We use l_J(β_J) = l(β̃) to denote the restriction of l to R^{|J|}, where β̃_J = β_J and β̃_{J^c} = 0, with Fenchel conjugate l^*_J. However, in general we do not have the property that l^*_J(κ_J) = l^*(κ̃) for κ̃_J = κ_J and κ̃_{J^c} = 0.

For a potential active index set J ⊆ {1, · · · , p} which belongs to the collection of allowed nonzero patterns P, we use C_J to denote the set of active index sets, i.e., the collection of index sets C ∈ C satisfying C ∩ J ≠ ∅. The balance part Ω_J on R^{|J|} is defined by

\Omega_J(\beta_J) = \sum_{C\in\mathcal{C}} \|d^C_J \cdot \beta_J\|_2 = \sum_{C\in\mathcal{C}_J} \|d^C_J \cdot \beta_J\|_2,

and its dual norm Ω^*_J(κ_J) = max_{Ω_J(β_J)≤1} β_J^T κ_J is also defined on R^{|J|}.

Let J ⊆ {1, · · · , p}. The following two ontology problems

\min_{\beta_J\in\mathbb{R}^{|J|}} l_J(\beta_J) + \frac{\lambda}{2}[\Omega_J(\beta_J)]^2,  (6)

\max_{\kappa_J\in\mathbb{R}^{|J|}} -l^*_J(-\kappa_J) - \frac{1}{2\lambda}[\Omega^*_J(\kappa_J)]^2,  (7)

are dual to each other and strong duality holds. The pair of primal-dual variables {β_J, κ_J} is optimal if and only if κ_J = −∇l_J(β_J) and β_J^T κ_J = (1/λ)[Ω^*_J(κ_J)]² = λ[Ω_J(β_J)]². This shows that the optimization ontology problem is dual to the reduced ontology problem.

This enables us to compute the duality gap for the optimization ontology problem (6), which is reduced to the active index set of variables J. In fact, this duality gap vanishes if we successively solve ontology problem (6) for increasingly larger active index sets J. Starting from the optimality of the ontology problem in (6), we study how to certify optimality, or equivalently evaluate the duality gap, for the full ontology problem in (5). The duality gap of the optimization ontology problem in (6) can be expressed precisely as the sum of two nonnegative parts:

l_J(\beta_J) + l^*_J(-\kappa_J) + \frac{\lambda}{2}[\Omega_J(\beta_J)]^2 + \frac{1}{2\lambda}[\Omega^*_J(\kappa_J)]^2
= \{l_J(\beta_J) + l^*_J(-\kappa_J) + \beta_J^T\kappa_J\} + \Big\{\frac{\lambda}{2}[\Omega_J(\beta_J)]^2 + \frac{1}{2\lambda}[\Omega^*_J(\kappa_J)]^2 - \beta_J^T\kappa_J\Big\}.

This duality gap can be regarded as the sum of two duality gaps, corresponding to l_J and Ω_J respectively. Hence, if we take a primal candidate β_J and select κ_J = −∇l_J(β_J), the duality gap relative to l_J disappears and the total duality gap reduces to

\frac{\lambda}{2}[\Omega_J(\beta_J)]^2 + \frac{1}{2\lambda}[\Omega^*_J(\kappa_J)]^2 - \beta_J^T\kappa_J.

To verify that the reduced solution β_J is optimal for the full ontology problem in (5), we pad β_J with zeros on J^c to obtain β and compute κ = −∇l(β), which satisfies κ_J = −∇l_J(β_J). For this candidate pair of primal and dual variables {β, κ}, the duality gap for the full ontology problem in (5) equals

\frac{\lambda}{2}[\Omega_J(\beta_J)]^2 + \frac{1}{2\lambda}[\Omega^*_J(\kappa_J)]^2 - \beta_J^T\kappa_J = \frac{1}{2\lambda}\big([\Omega^*(\kappa)]^2 - \lambda\,\beta_J^T\kappa_J\big).

We can interpret the active index set algorithm as a walk through the directed acyclic graph of nonzero patterns permitted by the norm Ω. The parents Π_P(J) of J in the directed acyclic ontology graph are exactly the patterns containing the variables that may enter the active index set at the next iteration of Algorithm 2. The index sets that lie exactly at the boundary of the active collection are F_J = {C ∈ (C_J)^c ; there is no C′ ∈ (C_J)^c with C ⊊ C′}, i.e., the inactive index sets that are not contained in any other inactive index set. In addition, the active index set is usually increased only to guarantee that the solution obtained by Algorithm 2 is optimal.

Algorithm 2. Active index set algorithm
Input: data {(v_i, y_i), i = 1, · · · , n}, balance parameter λ, maximum number of variables s and duality gap precision ε.
Initialization: J = ∅, β̂ = 0.
While the condition

\max_{K\in\Pi_{\mathcal{P}}(J)} \frac{\|\nabla l(\beta)_{K-J}\|_2}{\sum_{H\in\mathcal{C}_K-\mathcal{C}_J}\|d^H_{K-J}\|_\infty} \le \{-\lambda\,\beta^T\nabla l(\beta)\}^{\frac{1}{2}}

(here β denotes β̂ padded with zeros on J^c) is not satisfied and |J| ≤ s do
  Replace J by the K ∈ Π_P(J) that most violates the above condition.
  Solve the reduced problem min_{β_J∈R^{|J|}} l_J(β_J) + (λ/2)[Ω_J(β_J)]² to get β̂.
End while
While the condition

\max_{C\in\mathcal{F}_J} \Big\{\sum_{k\in C}\Big[\frac{\nabla l(\beta)_k}{\sum_{H\in(\mathcal{C}_J)^c,\,k\in H} d^H_k}\Big]^2\Big\}^{\frac{1}{2}} \le \{\lambda(2\varepsilon - \beta^T\nabla l(\beta))\}^{\frac{1}{2}}

is not satisfied and |J| ≤ s do
  Revise J by the following procedure:
  (begin procedure) Let C ∈ F_J be the index set that most violates the above condition.
  if C ∩ (∪_{K∈Π_P(J)} K) ≠ ∅ then
    for K ∈ Π_P(J) such that K ∩ C ≠ ∅ do J ← J ∪ K end for
  else
    for H ∈ F_J such that H ∩ C ≠ ∅ do
      for K ∈ Π_P(J) such that K ∩ H ≠ ∅ do J ← J ∪ K end for
    end for
  end if (end procedure)
  Solve the reduced problem min_{β_J∈R^{|J|}} l_J(β_J) + (λ/2)[Ω_J(β_J)]² to get β̂.
End while
Output: active index set J, loading vector β̂.

If the number of active variables is upper bounded by s ≪ p, the time complexity of Algorithm 2 is the sum of: 1) the computation of the gradient, O(snp) for the square loss; 2) the cost of the underlying solver called by the active index set algorithm, O(s max_{J∈P,|J|≤s} |C_J|^{3.5} + s^{4.5}) if it is a standard SOCP solver; 3) t_1 times the evaluation of the first while-condition of Algorithm 2, that is O(t_1 p|C|); 4) t_2 times the evaluation of the second while-condition, that is O(t_2 p|C|), with t_1 + t_2 ≤ s.

We finally obtain a complexity with leading term O(sp|C| + s max_{J∈P,|J|≤s} |C_J|^{3.5} + s^{4.5}).
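The following Python sketch mirrors only the overall structure of Algorithm 2: grow the active set J group by group, re-solve the reduced problem (6) on J, and stop when the size budget s is reached. The precise while-conditions (optimality certificates) of Algorithm 2 are not reproduced; a simple gradient-magnitude heuristic (our assumption) picks the next group, and the reduced problems are solved with cvxpy as in the previous sketch.

```python
import numpy as np
import cvxpy as cp

def solve_reduced(V, y, groups, weights, lam, J):
    """Reduced problem (6): optimize only over the active variables J, with beta_{J^c} = 0."""
    n, p = V.shape
    J = sorted(J)
    pos = {j: k for k, j in enumerate(J)}
    bJ = cp.Variable(len(J))
    omega = 0
    for C, dC in zip(groups, weights):
        idx = [pos[j] for j in C if j in pos]
        if idx:                                   # restriction d^C_J of the weights to J
            w = np.asarray([d for j, d in zip(C, dC) if j in pos])
            omega = omega + cp.norm(cp.multiply(w, bJ[idx]))
    risk = cp.sum_squares(V[:, J] @ bJ - y) / (2 * n)
    cp.Problem(cp.Minimize(risk + (lam / 2) * cp.square(omega))).solve()
    beta = np.zeros(p)
    beta[J] = bJ.value
    return beta

def active_set(V, y, groups, weights, lam, s=10, tol=1e-3):
    """Structural skeleton only: grow J group by group and re-solve (6) on the enlarged set."""
    n, p = V.shape
    J, beta = set(), np.zeros(p)
    while len(J) <= s:
        grad = V.T @ (V @ beta - y) / n
        # pick the group with the largest gradient mass outside the current active set
        cand = [(np.linalg.norm(grad[[j for j in C if j not in J]]), C)
                for C in groups if any(j not in J for j in C)]
        if not cand:
            break
        best_score, best_C = max(cand, key=lambda t: t[0])
        if best_score <= tol:
            break
        J |= set(best_C)
        beta = solve_reduced(V, y, groups, weights, lam, J)
    return J, beta

# Hypothetical toy run: only the first group carries signal.
rng = np.random.default_rng(1)
V = rng.standard_normal((30, 6))
y = V[:, :2] @ np.array([1.5, -1.0])
print(active_set(V, y, [[0, 1], [2, 3], [4, 5]], [[1.0, 1.0]] * 3, lam=0.3, s=4))
```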

Furthermore, careful observation shows that an index set may be used several times in the implementation; this is the overlapping phenomenon. In our ontology setting, these overlaps can be controlled by the choice of the weights (d^C)_{C∈C}, which take into account the fact that elements lying in overlapping index sets are penalized several times.

    IV. EXPERIMENTS

In this section, five simulation experiments concerning ontology similarity measure and ontology mapping are designed. In these five experiments, we mainly test the effectiveness of Algorithm 2: after the sparse vector β is obtained, the ontology function f is deduced via (1). In our experiments, the ontology loss function is chosen as the square loss. To make the comparisons as exact as possible, Algorithm 2 was implemented in C++, using the available LAPACK and BLAS libraries for linear algebra operations. The following five experiments were run on a dual-core CPU with 8 GB of memory.

    A. Ontology similarity measure experiment on plant data

In the first experiment, we use O1, a plant “PO” ontology constructed at www.plantontology.org. The structure of O1 is presented in Fig. 1. P@N (precision ratio, see Craswell and Hawking [30]) is used to measure the quality of the experiment data.

First, experts give the closest N concepts for every vertex on the ontology graph in the plant field. Then the first N concepts for every vertex on the ontology graph are obtained by Algorithm 2, and the precision ratio can be computed. More precisely, for a vertex v and a given integer N > 0, let Sim^{N,expert}_v be the set, determined by experts, of the N vertices having the highest similarity to v. Let

v^1_v = \arg\min_{v'\in V(G)-\{v\}} \{|f(v)-f(v')|\},
v^2_v = \arg\min_{v'\in V(G)-\{v, v^1_v\}} \{|f(v)-f(v')|\},
\cdots
v^N_v = \arg\min_{v'\in V(G)-\{v, v^1_v, \cdots, v^{N-1}_v\}} \{|f(v)-f(v')|\},

and Sim^{N,algorithm}_v = {v^1_v, v^2_v, · · · , v^N_v}. Then the precision ratio for vertex v is defined by

\mathrm{Pre}^N_v = \frac{|\mathrm{Sim}^{N,\mathrm{algorithm}}_v \cap \mathrm{Sim}^{N,\mathrm{expert}}_v|}{N}.

The P@N average precision ratio for the ontology graph G is then

\mathrm{Pre}^N_G = \frac{\sum_{v\in V(G)} \mathrm{Pre}^N_v}{|V(G)|}.
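The P@N computation itself is elementary; a short sketch follows, with hypothetical ontology-function values and expert sets.

```python
def sim_algorithm(f_values, v, N):
    """Sim_v^{N,algorithm}: the N vertices whose ontology-function values are closest to f(v)."""
    others = sorted((u for u in f_values if u != v),
                    key=lambda u: abs(f_values[v] - f_values[u]))
    return set(others[:N])

def average_precision_at_N(f_values, expert_sets, N):
    """P@N average precision ratio Pre_G^N over all vertices of the ontology graph."""
    ratios = [len(sim_algorithm(f_values, v, N) & set(expert_sets[v])) / N for v in f_values]
    return sum(ratios) / len(ratios)

# Hypothetical toy example with five vertices a..e and expert judgements.
f_values = {"a": 0.10, "b": 0.15, "c": 0.90, "d": 0.12, "e": 0.80}
expert_sets = {"a": {"b", "d"}, "b": {"a", "d"}, "c": {"e", "b"},
               "d": {"a", "b"}, "e": {"c", "a"}}
print(average_precision_at_N(f_values, expert_sets, N=2))
```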

Meanwhile, the ontology methods in [15], [16] and [17] are applied to the “PO” ontology. After obtaining the average precision ratios by means of these three algorithms, we compare the results with those of Algorithm 2. Part of the data is listed in Table I.


Fig. 1. The Structure of “PO” Ontology.

TABLE I. THE EXPERIMENT RESULTS OF ONTOLOGY SIMILARITY MEASURE

                     P@3 average        P@5 average        P@10 average
                     precision ratio    precision ratio    precision ratio
Our Algorithm        0.5292             0.6388             0.8275
Algorithm in [15]    0.4549             0.5117             0.5859
Algorithm in [16]    0.4282             0.4849             0.5632
Algorithm in [17]    0.4831             0.5635             0.6871

When N = 3, 5 or 10, the precision ratios obtained from our algorithm are somewhat higher than those determined by the algorithms proposed in [15], [16] and [17]. Furthermore, the precision ratios increase noticeably as N increases. As a result, our algorithm turns out to be better and more effective than those raised in [15], [16] and [17].

    B. Ontology mapping experiment on humanoid robotics data

In the second experiment, we use “humanoid robotics” ontologies O2 and O3, whose structures are presented in Fig. 2 and Fig. 3, respectively. The ontology O2 presents the leg joint structure of a bionic walking device for a six-legged robot, and the ontology O3 presents the exoskeleton frame of a robot with wearable and power-assisted lower extremities.

The aim of this experiment is to obtain an ontology mapping between O2 and O3. We again take the P@N precision ratio as a measure of the quality of the experiment. After applying the ontology algorithms in [31], [16] and [17] to the “humanoid robotics” ontologies and obtaining the average precision ratios, we compare them with those of our method. Some results are listed in Table II.

When N = 1, 3 or 5, the precision ratios obtained from our new ontology algorithm are higher than those determined by the algorithms proposed in [31], [16] and [17]. Furthermore, the precision ratios increase noticeably as N increases. As a result, our algorithm turns out to be better and more effective than those raised in [31], [16] and [17].

Fig. 2. “Humanoid Robotics” Ontology O2.

    C. Ontology similarity measure experiment on biology data

In the third experiment, we use the gene “GO” ontology O4, constructed at the website http://www.geneontology.org. The structure of O4 is presented in Fig. 4. Again, we choose P@N as a measure of the quality of the experiment data. The ontology methods in [16], [17] and [32] are then applied to the “GO” ontology. After obtaining the average precision ratios by means of these three algorithms, we compare the results with those of Algorithm 2. Part of the data is listed in Table III.

When N = 3, 5 or 10, the precision ratios obtained from our ontology algorithm are higher than those determined by the algorithms proposed in [16], [17] and [32]. Furthermore, the precision ratios increase noticeably as N increases. As a result, our algorithm turns out to be better and more effective than those raised in [16], [17] and [32].


TABLE II. THE EXPERIMENT RESULTS OF ONTOLOGY MAPPING

                     P@1 average        P@3 average        P@5 average
                     precision ratio    precision ratio    precision ratio
Our Algorithm        0.2778             0.4815             0.6889
Algorithm in [31]    0.2778             0.4815             0.5444
Algorithm in [16]    0.2222             0.4074             0.4889
Algorithm in [17]    0.2778             0.4630             0.5333

TABLE III. THE EXPERIMENT RESULTS OF ONTOLOGY SIMILARITY MEASURE

                     P@3 average        P@5 average        P@10 average       P@20 average
                     precision ratio    precision ratio    precision ratio    precision ratio
Our Algorithm        0.4963             0.6275             0.7418             0.8291
Algorithm in [16]    0.4638             0.5348             0.6234             0.7459
Algorithm in [17]    0.4356             0.4938             0.5647             0.7194
Algorithm in [32]    0.4213             0.5183             0.6019             0.7239

Fig. 3. “Humanoid Robotics” Ontology O3.

    Fig. 4. The Structure of “GO” Ontology.

Fig. 5. “Physics Education” Ontology O5.


    D. Ontology mapping experiment on physics education data

In the fourth experiment, we use “physics education” ontologies O5 and O6, whose structures are presented in Fig. 5 and Fig. 6, respectively.

The aim of this experiment is to obtain an ontology mapping between O5 and O6. We take the P@N precision ratio as a measure of the quality of the experiment. This time we apply the ontology algorithms in [16], [17] and [33] to the “physics education” ontologies, and then compare the precision ratios obtained from the three methods with ours. Some results are listed in Table IV.

When N = 1, 3 or 5, the precision ratios obtained with our new ontology mapping algorithm are higher than those determined by the algorithms proposed in [16], [17] and [33]. Furthermore, the precision ratios increase noticeably as N increases. As a result, our algorithm turns out to be better and more effective than those raised in [16], [17] and [33].


TABLE IV. THE EXPERIMENT RESULTS OF ONTOLOGY MAPPING

                     P@1 average        P@3 average        P@5 average
                     precision ratio    precision ratio    precision ratio
Our Algorithm        0.6774             0.7634             0.9097
Algorithm in [16]    0.6129             0.7312             0.7935
Algorithm in [17]    0.6913             0.7556             0.8452
Algorithm in [33]    0.6774             0.7742             0.8968

Fig. 6. “Physics Education” Ontology O6.

    Fig. 7. “University” Ontology O7.


    E. Ontology mapping experiment on university data

In the last experiment, we use “University” ontologies O7 and O8, whose structures are presented in Fig. 7 and Fig. 8, respectively.

The aim of this experiment is to obtain an ontology mapping between O7 and O8. We take the P@N precision ratio as a criterion to measure the quality of the experiment. This time we apply the ontology algorithms in [15], [16] and [17] to the “University” ontologies, and then compare the precision ratios obtained from the three methods with ours. Some results are listed in Table V.

Fig. 8. “University” Ontology O8.

When N = 1, 3 or 5, the precision ratios obtained with our new ontology mapping algorithm are higher than those determined by the algorithms proposed in [15], [16] and [17]. Furthermore, the precision ratios increase noticeably as N increases. As a result, our algorithm turns out to be better and more effective than those raised in [15], [16] and [17].

TABLE V. THE EXPERIMENT RESULTS OF ONTOLOGY MAPPING

                     P@1 average        P@3 average        P@5 average
                     precision ratio    precision ratio    precision ratio
Our Algorithm        0.5714             0.6667             0.7143
Algorithm in [15]    0.5000             0.5952             0.6857
Algorithm in [16]    0.4286             0.5238             0.6071
Algorithm in [17]    0.5714             0.6429             0.6500

    V. CONCLUSIONS

Ontology, as a model of structural representation and storage for big data, has been widely employed in various disciplines and has proved to be highly efficient. The essence of ontology application algorithms is deducing the similarity measure function between vertices on a specific ontology graph. In recent years, all kinds of machine learning approaches have been introduced for ontology similarity measure computation and ontology mapping construction. One effective ontology learning technique maps each vertex to a real number using an ontology function f : V → R, and the similarity between vi and vj is then judged by |f(vi) − f(vj)|. Such a learning method is suitable for ontology computation with big data and has aroused great interest among researchers.

In this article, we focus on the feature extraction of the ontology vector and report a new framework for an ontology sparse vector learning algorithm in terms of SOCP. Finally, simulation data from the five experiments reveal that our new algorithm is highly efficient in biology, physics education, plant science, humanoid robotics and university applications. The new technique contributes to the state of the art of ontology application and illustrates promising prospects of application in multiple disciplines.




    REFERENCES

[1] J. M. Przydzial, B. Bhhatarai, and A. Koleti, “GPCR Ontology: Development and Application of a G Protein-Coupled Receptor Pharmacology Knowledge Framework,” Bioinformatics, vol. 29, no. 24, pp. 3211-3219, 2013.

[2] S. Koehler, S. C. Doelken, and C. J. Mungall, “The Human Phenotype Ontology Project: Linking Molecular Biology and Disease Through Phenotype Data,” Nucleic Acids Research, vol. 42, no. D1, pp. 966-974, 2014.

[3] M. Ivanovic and Z. Budimac, “An Overview of Ontologies and Data Resources in Medical Domains,” Expert Systems with Applications, vol. 41, no. 11, pp. 5158-5166, 2014.

[4] A. Hristoskova, V. Sakkalis, and G. Zacharioudakis, “Ontology-Driven Monitoring of Patient's Vital Signs Enabling Personalized Medical Detection and Alert,” Sensors, vol. 14, no. 1, pp. 1598-1628, 2014.

[5] M. A. Kabir, J. Han, and J. Yu, “User-Centric Social Context Information Management: An Ontology-Based Approach and Platform,” Personal and Ubiquitous Computing, vol. 18, no. 5, pp. 1061-1083, 2014.

[6] Y. L. Ma, L. Liu, K. Lu, B. H. Jin, and X. J. Liu, “A Graph Derivation Based Approach for Measuring and Comparing Structural Semantics of Ontologies,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 5, pp. 1039-1052, 2014.

[7] Z. Li, H. S. Guo, Y. S. Yuan, and L. B. Sun, “Ontology Representation of Online Shopping Customers Knowledge in Enterprise Information,” Applied Mechanics and Materials, vol. 483, pp. 603-606, 2014.

[8] R. Santodomingo, S. Rohjans, M. Uslar, J. A. Rodriguez-Mondejar, and M. A. Sanz-Bobi, “Ontology Matching System for Future Energy Smart Grids,” Engineering Applications of Artificial Intelligence, vol. 32, pp. 242-257, 2014.

[9] T. Pizzuti, G. Mirabelli, M. A. Sanz-Bobi, and F. Gomez-Gonzalez, “Food Track & Trace Ontology for Helping the Food Traceability Control,” Journal of Food Engineering, vol. 120, no. 1, pp. 17-30, 2014.

[10] N. Lasierra, A. Alesanco, and J. Garcia, “Designing An Architecture for Monitoring Patients at Home: Ontologies and Web Services for Clinical and Technical Management Integration,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 3, pp. 896-906, 2014.

[11] M. Tovar, D. Pinto, A. Montes, G. Gonzalez, and D. Vilarino, “Identification of Ontological Relations in Domain Corpus Using Formal Concept Analysis,” Engineering Letters, vol. 23, no. 2, pp. 72-76, 2015.

[12] V. Gopal and N. S. Gowri Ganesh, “Ontology Based Search Engine Enhancer,” IAENG International Journal of Computer Science, vol. 35, no. 3, pp. 413-420, 2008.

[13] W. Gao and L. Shi, “Szeged Related Indices of Unilateral Polyomino Chain and Unilateral Hexagonal Chain,” IAENG International Journal of Applied Mathematics, vol. 45, no. 2, pp. 138-150, 2015.

[14] W. Gao, L. L. Zhu and Y. Guo, “Multi-dividing Infinite Push Ontology Algorithm,” Engineering Letters, vol. 23, no. 3, pp. 132-139, 2015.

[15] Y. Y. Wang, W. Gao, Y. G. Zhang and Y. Gao, “Ontology Similarity Computation Use Ranking Learning Method,” The 3rd International Conference on Computational Intelligence and Industrial Application, Wuhan, China, 2010, pp. 20-22.

[16] X. Huang, T. W. Xu, W. Gao and Z. Y. Jia, “Ontology Similarity Measure and Ontology Mapping Via Fast Ranking Method,” International Journal of Applied Physics and Mathematics, vol. 1, no. 1, pp. 54-59, 2011.

[17] W. Gao and L. Liang, “Ontology Similarity Measure by Optimizing NDCG Measure and Application in Physics Education,” Future Communication, Computing, Control and Management, vol. 142, pp. 415-421, 2011.

[18] P. Torres, J. van Wingerden, M. Verhaegen, “PO-MOESP subspace identification of directed acyclic graphs with unknown topology,” Automatica, vol. 53, pp. 60-71, 2015.

[19] P. Torres, J. van Wingerden, M. Verhaegen, “Hierarchical subspace identification of directed acyclic graphs,” International Journal of Control, vol. 88, no. 1, pp. 123-137, 2015.

[20] J. Marenco, M. Mydlarz, D. Severin, “Topological additive numbering of directed acyclic graphs,” Information Processing Letters, vol. 115, no. 2, pp. 199-202, 2015.

[21] J. Pensar, H. Nyman, T. Koski, J. Corander, “Labeled directed acyclic graphs: a generalization of context-specific independence in directed graphical models,” Data Mining and Knowledge Discovery, vol. 29, no. 2, pp. 503-533, 2015.

[22] N. Kamiyama, “The nucleolus of arborescence games in directed acyclic graphs,” Operations Research Letters, vol. 43, no. 1, pp. 89-92, 2015.

[23] R. Trivisonno, R. Guerzoni, I. Vaishnavi, D. Soldani, “SDN-based 5G mobile networks: architecture, functions, procedures and backward compatibility,” Transactions on Emerging Telecommunications Technologies, vol. 26, no. 1, pp. 82-92, 2015.

[24] M. F. Malvestuto, “A backward selection procedure for approximating a discrete probability distribution by decomposable models,” Kybernetika, vol. 48, no. 5, pp. 825-844, 2012.

[25] Q. J. Shi, W. Q. Xu, T. H. Chang, Y. C. Wang, E. B. Song, “Joint beamforming and power splitting for MISO interference channel with SWIPT: an SOCP relaxation and decentralized algorithm,” IEEE Transactions on Signal Processing, vol. 62, no. 23, pp. 6194-6208, 2014.

[26] A. S. Dalalyan, “SOCP based variance free dantzig selector with application to robust estimation,” Comptes Rendus Mathematique, vol. 350, no. 15-16, pp. 785-788, 2012.

[27] A. Jiang, H. K. Kwan, Y. P. Zhu, “Peak-error-constrained sparse FIR filter design using iterative SOCP,” IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4035-4044, 2012.

[28] A. Frangioni, C. Gentile, “A computational comparison of reformulations of the perspective relaxation: SOCP vs. cutting planes,” Operations Research Letters, vol. 37, no. 3, pp. 206-210, 2009.

[29] S. Srirangarajan, A. H. Tewfik, Z. Q. Luo, “Distributed Sensor Network Localization Using SOCP Relaxation,” IEEE Transactions on Wireless Communications, vol. 7, no. 12, pp. 4886-4895, 2008.

[30] N. Craswell and D. Hawking, “Overview of the TREC 2003 Web Track,” in Proceedings of the Twelfth Text Retrieval Conference, Gaithersburg, Maryland, NIST Special Publication, 2003, pp. 78-92.

[31] W. Gao and M. H. Lan, “Ontology Mapping Algorithm Based on Ranking Learning Method,” Microelectronics and Computer, vol. 28, no. 9, pp. 59-61, 2011.

[32] Y. Gao and W. Gao, “Ontology Similarity Measure and Ontology Mapping Via Learning Optimization Similarity Function,” International Journal of Machine Learning and Computing, vol. 2, no. 2, pp. 107-112, 2012.

[33] W. Gao, Y. Gao, and L. Liang, “Diffusion and Harmonic Analysis on Hypergraph and Application in Ontology Similarity Measure and Ontology Mapping,” Journal of Chemical and Pharmaceutical Research, vol. 5, no. 9, pp. 592-598, 2013.

Meihui Lan, female, was born in the city of Yiliang, Yunnan Province, China on Aug. 25, 1982. In 2006, she received her bachelor degree from the department of computer science and technology, Yunnan Normal University, China. She then enrolled in the department of computer software and theory at the same school, and received her Master degree and a software designer certificate there in 2009.

Now, she is a lecturer in the department of computer science and engineering, Qujing Normal University. As a researcher in computer science, her interests are natural language processing and machine learning.

Jian Xu is an associate professor in the department of computer science and engineering, Qujing Normal University. As a researcher in computer science, his interests are computer education and information retrieval.

Wei Gao, male, was born in the city of Shaoxing, Zhejiang Province, China on Feb. 13, 1981. He received two bachelor degrees, one in computer science from Zhejiang Industrial University in 2004 and one in mathematics education from the College of Zhejiang Education in 2006. He then enrolled in the department of computer science and information technology, Yunnan Normal University, and received his Master degree there in 2009. In 2012, he received his PhD degree from the department of mathematics, Soochow University, China.

Now, he is an associate professor in the department of information, Yunnan Normal University. As a researcher in computer science and mathematics, his interests cover graph theory, statistical learning theory, information retrieval, and artificial intelligence.

IAENG International Journal of Computer Science, 43:1, IJCS_43_1_02 (Advance online publication: 29 February 2016)

