Search (171 results, page 1 of 9)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.14
    0.14001429 = coord(2/4) × (0.2547937 + 0.025234876)   [Lucene ClassicSimilarity: weight = queryWeight × fieldWeight; queryWeight = idf × queryNorm (queryNorm=0.0663307); fieldWeight = tf × idf × fieldNorm]
      0.2547937 = weight(_text_:java in 1604): queryWeight 0.4674661 × fieldWeight 0.5450528; tf=1.4142135 (freq=2.0), idf=7.0475073 (docFreq=104, maxDocs=44421), fieldNorm=0.0546875
      0.025234876 = weight(_text_:und in 1604): queryWeight 0.1471148 × fieldWeight 0.17153187; tf=1.4142135 (freq=2.0), idf=2.217899 (docFreq=13141, maxDocs=44421), fieldNorm=0.0546875
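    The breakdowns on this page are ClassicSimilarity (TF-IDF) explain trees, apparently from Lucene. As a sanity check, here is a minimal sketch recomputing the first result's score from the factors listed above; it is plain arithmetic on the published numbers, not a call into Lucene:

    // Recomputes result 1's score from its ClassicSimilarity factors.
    // All constants are copied from the breakdown above; note that
    // idf = 1 + ln(maxDocs / (docFreq + 1)), e.g. 1 + ln(44421/105) ≈ 7.0475.
    public class ClassicSimilarityCheck {
        static double weight(double freq, double idf, double queryNorm, double fieldNorm) {
            double tf = Math.sqrt(freq);               // tf = sqrt(term frequency)
            double queryWeight = idf * queryNorm;      // query-side factor
            double fieldWeight = tf * idf * fieldNorm; // document-side factor
            return queryWeight * fieldWeight;
        }

        public static void main(String[] args) {
            double queryNorm = 0.0663307;
            double java = weight(2.0, 7.0475073, queryNorm, 0.0546875); // ≈ 0.2547937
            double und  = weight(2.0, 2.217899,  queryNorm, 0.0546875); // ≈ 0.025234876
            double coord = 2.0 / 4.0;                  // 2 of 4 query clauses matched
            System.out.println(coord * (java + und));  // ≈ 0.14001429
        }
    }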
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL-compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To enhance the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  2. Nix, M.: ¬Die praktische Einsetzbarkeit des CIDOC CRM in Informationssystemen im Bereich des Kulturerbes (2004) 0.11
    0.10902268 = coord(2/4) × (0.18199553 + 0.036049824)
      0.18199553 = weight(_text_:java in 729): queryWeight 0.4674661 × fieldWeight 0.38932347; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.0390625
      0.036049824 = weight(_text_:und in 729): queryWeight 0.1471148 × fieldWeight 0.24504554; tf=2.828427 (freq=8.0), idf=2.217899, fieldNorm=0.0390625
    
    Abstract
    A practically unlimited amount of information is available to us via the World Wide Web. The problem this creates is to master that volume and reach the information that is needed at a given moment. The overwhelming supply forces professional users and laypeople alike to search, whatever their requirements for the desired information. One way to make this searching more efficient is to develop more powerful search engines. Another is to structure data better, so that the information they contain can be reached. Highly structured data can be processed by machines, so that part of the search effort can be automated. The Semantic Web is the vision of an evolved World Wide Web in which data structured in this way are processed by so-called software agents. This progressive structuring of data by content is called semantization. The first part of the thesis sketches several important methods for structuring data by content, in order to clarify the position of ontologies within semantization. The third chapter presents the structure and purpose of the CIDOC Conceptual Reference Model (CRM), a domain ontology for the cultural heritage field. The practical part that follows discusses and implements various approaches to using the CRM. A proposal for implementing the model in XML, suitable for data transport, is worked out. In addition, the design of a Java class library is presented, on which the processing and use of the model within an information system can build.
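    The thesis only outlines that class library; purely as an illustration of what a CRM binding could look like, here is a minimal hypothetical fragment. The E/P identifiers are real CIDOC CRM classes and properties, but every Java name is invented:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of CIDOC CRM classes as Java types; the CRM E/P
    // identifiers are real, the API design is invented for illustration.
    class E1CrmEntity {
        final String label;
        E1CrmEntity(String label) { this.label = label; }
    }

    class E52TimeSpan extends E1CrmEntity {
        E52TimeSpan(String label) { super(label); }
    }

    class E12Production extends E1CrmEntity {
        final List<E1CrmEntity> p108HasProduced = new ArrayList<>(); // P108 has produced
        E52TimeSpan p4HasTimeSpan;                                   // P4 has time-span
        E12Production(String label) { super(label); }
    }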
  3. Botana Varela, J.: Unscharfe Wissensrepräsentationen bei der Implementation des Semantic Web (2004) 0.10
    0.099775426 = coord(2/4) × (0.14559641 + 0.053954437)
      0.14559641 = weight(_text_:java in 346): queryWeight 0.4674661 × fieldWeight 0.31145877; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.03125
      0.053954437 = weight(_text_:und in 346): queryWeight 0.1471148 × fieldWeight 0.36675057; tf=5.2915025 (freq=28.0), idf=2.217899, fieldNorm=0.03125
    
    Abstract
    This thesis presents an approach to implementing a knowledge representation with the properties sketched in section 1.1 and with the Semantic Web as its area of application. The thesis falls into two main parts: an investigation part (chapters 2-5), which defines the terminology introduced in section 1.1 and gives a comprehensive overview of the underlying concepts, and an implementation part (chapter 6), which builds on the knowledge gained in the investigation part to develop a semantic search service. Chapter 2 first explains the concept of semantic interpretation and, in that context, distinguishes chiefly between data, information and knowledge. Chapter 3 considers knowledge representation from a cognitive perspective and introduces the concept of fuzziness. Chapter 4 describes approaches to knowledge representation and retrieval from both a historical and a current point of view, again discussing the concept of fuzziness. Chapter 5 explains the models currently in use on the WWW and their limitations; it then describes, in the context of decision making, the requirements the WWW places on an adequate knowledge representation, explains, by way of Semantic Web technologies, the representation paradigms that meet these requirements, and finally introduces the Topic Map paradigm. Chapter 6 develops a prototype based on the insights gained in the investigation part. It consists essentially of software tools that support automated, computer-assisted information extraction, fuzzy modelling, and the retrieval of knowledge. The tools are implemented in the Java programming language, and Topic Maps are used for the fuzzy knowledge representation; the implementation is presented step by step. Finally, the prototype is evaluated and an outlook on possible future extensions is given. Chapter 7 closes with a synthesis.
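    The abstract names the building blocks (Java, Topic Maps, fuzziness) without API detail. A minimal hypothetical sketch of how a degree of membership can be attached to a topic's occurrences follows; all names are invented, and a real implementation would sit on a Topic Maps engine:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a topic whose occurrences carry a membership
    // degree in [0,1], one common way to graft fuzziness onto a crisp model.
    class FuzzyTopic {
        final String id;
        final Map<String, Double> occurrences = new HashMap<>(); // occurrence IRI -> degree

        FuzzyTopic(String id) { this.id = id; }

        void addOccurrence(String iri, double degree) {
            occurrences.put(iri, Math.max(0.0, Math.min(1.0, degree))); // clamp to [0,1]
        }
    }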
  4. Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.054598656 = coord(1/4) × 0.21839462
      0.21839462 = weight(_text_:java in 4615): queryWeight 0.4674661 × fieldWeight 0.46718815; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.046875
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The model then uses the Semantic Web Rule Language (SWRL) and the Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
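    A minimal sketch of driving the JESS engine from Java follows. The rule file name and the (claim ...) fact are invented placeholders; in the paper the rules originate as SWRL and are translated for the engine rather than written by hand:

    import jess.JessException;
    import jess.Rete;

    // Minimal sketch of running a JESS rule base from Java. "fraud-rules.clp"
    // and the (claim ...) template are placeholders for illustration.
    public class FraudRuleRunner {
        public static void main(String[] args) throws JessException {
            Rete engine = new Rete();
            engine.batch("fraud-rules.clp");  // load the translated rule base
            engine.reset();                   // assert initial facts
            engine.assertString("(claim (id 42) (drug \"X\") (dosage 900))");
            int fired = engine.run();         // forward-chain to quiescence
            System.out.println(fired + " rules fired");
        }
    }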
  5. Andreas, H.: On frames and theory-elements of structuralism (2014) 0.05
    0.046346277 = coord(1/4) × 0.18538511
      0.18538511 = weight(_text_:having in 4402): queryWeight 0.39673427 × fieldWeight 0.4672778; tf=2.0 (freq=4.0), idf=5.981156 (docFreq=304, maxDocs=44421), fieldNorm=0.0390625
    
    Abstract
    There are quite a few success stories illustrating philosophy's relevance to information science. One can cite, for example, Leibniz's work on a characteristica universalis and a corresponding calculus ratiocinator through which he aspired to reduce reasoning to calculating. It goes without saying that formal logic initiated research on decidability and computational complexity. But even beyond the realm of formal logic, philosophy has served as a source of inspiration for developments in information and computer science. At the end of the twentieth century, formal ontology emerged from a quest for a semantic foundation for information systems with higher reusability than the systems available at the time. A success story that is less well documented is the advent of frame systems in computer science. Minsky is credited with having laid out the foundational ideas of such systems. There, the logic programming approach to knowledge representation is criticized by arguing that one should be more careful about the way human beings recognize objects and situations. Notably, the paper draws heavily on the writings of Kuhn and the Gestalt-theorists. It is not our intent, however, to document the traces of the frame idea in the works of philosophers. What follows is, rather, an exposition of a methodology for representing scientific knowledge that is essentially frame-like. This methodology is labelled as structuralist theory of science or, in short, as structuralism. The frame-like character of its basic meta-theoretical concepts makes structuralism likely to be useful in knowledge representation.
  6. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.05
    0.04549888 = coord(1/4) × 0.18199553
      0.18199553 = weight(_text_:java in 4179): queryWeight 0.4674661 × fieldWeight 0.38932347; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.0390625
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network, and an integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
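    A minimal sketch of the extraction step described above, using the Apache Jena ontology API to load an OWL file and enumerate its named classes ("recording.owl" is a placeholder file name):

    import org.apache.jena.ontology.OntClass;
    import org.apache.jena.ontology.OntModel;
    import org.apache.jena.ontology.OntModelSpec;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.util.iterator.ExtendedIterator;

    // Minimal sketch: load an OWL ontology with Jena and list its named classes.
    public class OntologyDump {
        public static void main(String[] args) {
            OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            model.read("recording.owl");      // placeholder file name
            ExtendedIterator<OntClass> it = model.listClasses();
            while (it.hasNext()) {
                OntClass c = it.next();
                if (c.getURI() != null) {     // skip anonymous class expressions
                    System.out.println(c.getURI());
                }
            }
        }
    }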
  7. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.04
    0.044733595 = coord(2/4) × (0.010814947 + 0.07865224)
      0.010814947 = weight(_text_:und in 2169): queryWeight 0.1471148 × fieldWeight 0.07351366; tf=1.4142135 (freq=2.0), idf=2.217899, fieldNorm=0.0234375
      0.07865224 = weight(_text_:having in 2169): queryWeight 0.39673427 × fieldWeight 0.19824918; tf=1.4142135 (freq=2.0), idf=5.981156, fieldNorm=0.0234375
    
    Abstract
    Part 1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.04
    0.036399104 = coord(1/4) × 0.14559641
      0.14559641 = weight(_text_:java in 935): queryWeight 0.4674661 × fieldWeight 0.31145877; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.03125
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
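    JAPE grammars are a pattern language over GATE annotations and cannot be reproduced faithfully in a few lines; as a deliberately simplified stand-in for what such a rule does (tag a known term followed by a century expression), consider this plain-Java illustration. It is not GATE code, and the term list and pattern are invented:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Deliberately simplified stand-in for a JAPE-style rule: tag a known
    // thesaurus term when it is followed by a century expression. Real JAPE
    // rules match typed annotations produced by GATE, not raw text.
    public class TinyExtractor {
        public static void main(String[] args) {
            String text = "A ditch containing Roman pottery of the 2nd century was recorded.";
            Pattern rule = Pattern.compile("(pottery|ditch|coin)\\b.*?(\\d+(?:st|nd|rd|th) century)");
            Matcher m = rule.matcher(text);
            while (m.find()) {
                System.out.println("PhysicalObject=" + m.group(1) + ", TimeAppellation=" + m.group(2));
            }
        }
    }

    The two tag names echo the CRM concepts (E49 Time Appellation, P19 Physical Object) that the project's actual JAPE rules target over GATE annotations.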
  9. Koenderink, N.J.J.P.; Assem, M. van; Hulzebos, J.L.; Broekstra, J.; Top, J.L.: ROC: a method for proto-ontology construction by domain experts (2008) 0.03
    0.03277177 = coord(1/4) × 0.13108708
      0.13108708 = weight(_text_:having in 647): queryWeight 0.39673427 × fieldWeight 0.3304153; tf=1.4142135 (freq=2.0), idf=5.981156, fieldNorm=0.0390625
    
    Abstract
    Ontology construction is a labour-intensive and costly process. Even though many formal and semi-formal vocabularies are available, creating an ontology for a specific application is hindered in a number of ways. Firstly, the process of eliciting concepts is a time-consuming and strenuous process. Secondly, it is difficult to keep focus. Thirdly, technical modelling constructs are hard to understand for the uninitiated. We propose ROC as a method to cope with these problems. ROC builds on well-known approaches for ontology construction. However, we reuse existing sources to generate a repository of proposed associations. ROC assists in efficiently putting forward all relevant concepts and relations by providing a large set of potential candidate associations. Secondly, rather than using intermediate representations of formal constructs, we confront the domain expert with 'natural-language-like' statements generated from RDF-based triples. Moreover, we strictly separate the roles of problem owner, domain expert and knowledge engineer, each having his own responsibilities and skills. The domain expert and problem owner keep focus by monitoring a well-defined application purpose. We have implemented an initial set of tools to support ROC. This paper describes the ROC method and two application cases in which we evaluate the overall approach.
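    ROC's pivotal step is rendering RDF-based triples as 'natural-language-like' statements for the domain expert to accept or reject. A minimal sketch of such a verbalizer; the label map and the example triple are invented:

    import java.util.Map;

    // Minimal sketch of ROC-style triple verbalization: render an RDF triple
    // as a natural-language-like statement. Labels and triple are invented.
    public class TripleVerbalizer {
        static final Map<String, String> LABELS = Map.of(
            "ex:Tomato", "a tomato",
            "ex:hasColour", "has the colour",
            "ex:Red", "red");

        static String verbalize(String s, String p, String o) {
            return LABELS.getOrDefault(s, s) + " " + LABELS.getOrDefault(p, p)
                    + " " + LABELS.getOrDefault(o, o) + ".";
        }

        public static void main(String[] args) {
            System.out.println(verbalize("ex:Tomato", "ex:hasColour", "ex:Red"));
            // -> "a tomato has the colour red."
        }
    }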
  10. Marcondes, C.H.; Costa, L.C da.: ¬A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.03
    0.03277177 = coord(1/4) × 0.13108708
      0.13108708 = weight(_text_:having in 3829): queryWeight 0.39673427 × fieldWeight 0.3304153; tf=1.4142135 (freq=2.0), idf=5.981156, fieldNorm=0.0390625
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. Published literature in the biomedical sciences is a huge source of knowledge, which can only be managed efficiently through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print formats such as the portable document format (PDF). Semantic Web and Linked Data technologies provide new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. Here a semantic model of scholarly electronic articles in the biomedical sciences is proposed that can overcome the limitations of traditional flat record formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and linked data, they may be integrated in reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
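    The model's core move is expressing a claim from an article as RDF so it can be linked and reasoned over. A minimal Jena sketch of encoding one such claim as a triple; all URIs are invented placeholders:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    // Minimal sketch: one scientific claim ("gene X is associated with
    // disease Y") as an RDF triple built with Jena. URIs are placeholders.
    public class ClaimBuilder {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            Resource geneX = model.createResource("http://example.org/gene/X");
            Property associatedWith = model.createProperty("http://example.org/vocab/associatedWith");
            Resource diseaseY = model.createResource("http://example.org/disease/Y");
            geneX.addProperty(associatedWith, diseaseY);
            model.write(System.out, "TURTLE");  // serialize the claim
        }
    }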
  11. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.03
    0.03277177 = coord(1/4) × 0.13108708
      0.13108708 = weight(_text_:having in 4439): queryWeight 0.39673427 × fieldWeight 0.3304153; tf=1.4142135 (freq=2.0), idf=5.981156, fieldNorm=0.0390625
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to have a complete formal vocabulary for the domain they need to be linked and unified into a multiviewpoint model while having the subjective viewpoint statements marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. However, in our case their ability to objectively assess others' opinions was examined as well. Our results show substantially higher accuracy in classification for the objective assessment approach compared to the results based on personal opinions.
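    The abstract does not say how the workers' judgments on a statement are combined; a simple majority vote over the three labels (objectively true / viewpoint / erroneous) is one plausible aggregation, sketched here purely as an assumption:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Assumption: labels for one ontological statement are aggregated by
    // majority vote. The abstract does not state the actual aggregation rule.
    public class LabelAggregator {
        static String majority(List<String> labels) {
            Map<String, Integer> counts = new HashMap<>();
            for (String l : labels) counts.merge(l, 1, Integer::sum);
            return counts.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse("undecided");
        }

        public static void main(String[] args) {
            System.out.println(majority(List.of("true", "viewpoint", "true", "erroneous", "true")));
            // -> "true"
        }
    }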
  12. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.03
    0.027299328 = coord(1/4) × 0.10919731
      0.10919731 = weight(_text_:java in 378): queryWeight 0.4674661 × fieldWeight 0.23359407; tf=1.4142135 (freq=2.0), idf=7.0475073, fieldNorm=0.0234375
    
    Content
    Long Papers
    * Suggestions for OWL 3, Pascal Hitzler.
    * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra.
    * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider.
    * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider.
    * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin.
    * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. (see the sketch after this list)
    * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler.
    * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens.
    * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao.
    * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall.
    * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
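    One of the papers above introduces the OWL API, a Java library for working with OWL 2 ontologies. A minimal sketch of its basic usage, creating an empty ontology and inspecting it (the ontology IRI is a placeholder):

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.IRI;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyCreationException;
    import org.semanticweb.owlapi.model.OWLOntologyManager;

    // Minimal sketch of the OWL API named in the paper list: create an empty
    // ontology and report its axiom count. The IRI is a placeholder.
    public class OwlApiHello {
        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.createOntology(IRI.create("http://example.org/onto"));
            System.out.println(ontology.getAxiomCount()); // 0 axioms in a fresh ontology
        }
    }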
  13. Ibekwe-SanJuan, F.: Constructing and maintaining knowledge organization tools : a symbolic approach (2006) 0.03
    0.026217414 = coord(1/4) × 0.104869656
      0.104869656 = weight(_text_:having in 595): queryWeight 0.39673427 × fieldWeight 0.26433223; tf=1.4142135 (freq=2.0), idf=5.981156, fieldNorm=0.03125
    
    Abstract
    Purpose - To propose a comprehensive and semi-automatic method for constructing or updating knowledge organization tools such as thesauri. Design/methodology/approach - The paper proposes a comprehensive methodology for thesaurus construction and maintenance combining shallow NLP with a clustering algorithm and an information visualization interface. The resulting system, TermWatch, extracts terms from a text collection, mines semantic relations between them using complementary linguistic approaches and clusters terms using these semantic relations. The clusters are mapped onto a 2D space using an integrated visualization tool. Findings - The clusters formed exhibit the different relations necessary to populate a thesaurus or ontology: synonymy, generic/specific and relatedness. The clusters represent, for a given term, its closest neighbours in terms of semantic relations. Practical implications - This could change the way in which information professionals (librarians and documentalists) undertake knowledge organization tasks. TermWatch can be useful either as a starting point for grasping the conceptual organization of knowledge in a huge text collection without having to read the texts, or as a suggestive tool for populating different hierarchies of a thesaurus or an ontology, because its clusters are based on semantic relations. Originality/value - This lies in several points: the combined use of linguistic relations with an adapted clustering algorithm, which is scalable and can handle sparse data. The paper proposes a comprehensive approach to semantic relations acquisition, whereas existing studies often use one or two approaches. The domain knowledge maps produced by the system represent an added advantage over existing approaches to automatic thesaurus construction in that clusters are formed using semantic relations between domain terms. Thus, while offering a meaningful synthesis of the information contained in the original corpus through clustering, the results can be used for knowledge organization tasks (thesaurus building and ontology population). The system also constitutes a platform for performing several knowledge-oriented tasks like science and technology watch, text mining and query refinement.
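    As a crude illustration of one linguistic cue such systems exploit, multiword terms sharing a head noun are candidates for the same cluster; here is a simplified grouping sketch with an invented term list. The real TermWatch combines many more relations (synonymy, insertions, modifier expansions) with a proper clustering algorithm:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Simplified illustration: group multiword terms by their head noun
    // (last token). Real systems use many more linguistic relations.
    public class HeadWordGrouper {
        public static void main(String[] args) {
            List<String> terms = List.of("automatic indexing", "semantic indexing",
                    "automatic subject indexing", "thesaurus construction", "ontology construction");
            Map<String, List<String>> groups = new TreeMap<>();
            for (String t : terms) {
                String head = t.substring(t.lastIndexOf(' ') + 1); // head = last token
                groups.computeIfAbsent(head, k -> new ArrayList<>()).add(t);
            }
            groups.forEach((head, members) -> System.out.println(head + " -> " + members));
        }
    }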
  14. Wildgen, W.: Semantischer Realismus und Antirealismus in der Sprachtheorie (1992) 0.02
    0.024182959 = coord(1/4) × 0.096731834
      0.096731834 = weight(_text_:und in 2139): queryWeight 0.1471148 × fieldWeight 0.6575262; tf=3.1622777 (freq=10.0), idf=2.217899, fieldNorm=0.09375
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  15. Sandkühler, H.J.: Epistemologischer Realismus und die Wirklichkeit des Wissens : eine Verteidigung der Philosophie des Geistes gegen Naturalismus und Reduktionismus (1992) 0.02
    0.02207592 = coord(1/4) × 0.08830368
      0.08830368 = weight(_text_:und in 1731): queryWeight 0.1471148 × fieldWeight 0.60023654; tf=3.4641016 (freq=12.0), idf=2.217899, fieldNorm=0.078125
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  16. Roth, G.; Schwegler, H.: Kognitive Referenz und Selbstreferentialität des Gehirns : ein Beitrag zur Klärung des Verhältnisses zwischen Erkenntnistheorie und Hirnforschung (1992) 0.02
    0.02207592 = coord(1/4) × 0.08830368
      0.08830368 = weight(_text_:und in 607): queryWeight 0.1471148 × fieldWeight 0.60023654; tf=3.4641016 (freq=12.0), idf=2.217899, fieldNorm=0.078125
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  17. Kutschera, F. von: ¬Der erkenntnistheoretische Realismus (1992) 0.02
    0.021629894 = coord(1/4) × 0.08651958
      0.08651958 = weight(_text_:und in 608): queryWeight 0.1471148 × fieldWeight 0.58810925; tf=2.828427 (freq=8.0), idf=2.217899, fieldNorm=0.09375
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  18. Franzen, W.: Idealismus statt Realismus? : Realismus plus Skeptizismus! (1992) 0.02
    0.021629894 = coord(1/4) × 0.08651958
      0.08651958 = weight(_text_:und in 612): queryWeight 0.1471148 × fieldWeight 0.58810925; tf=2.828427 (freq=8.0), idf=2.217899, fieldNorm=0.09375
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  19. Baumer, C.; Reichenberger, K.: Business Semantics - Praxis und Perspektiven (2006) 0.02
    0.019075774 = coord(1/4) × 0.076303095
      0.076303095 = weight(_text_:und in 20): queryWeight 0.1471148 × fieldWeight 0.51866364; tf=3.7416575 (freq=14.0), idf=2.217899, fieldNorm=0.0625
    
    Abstract
    The article introduces semantic technologies and surveys different lines of development. In particular, it presents Business Semantics and distinguishes them from the Semantic Web. The strengths of Business Semantics are illustrated with two practical examples, the Knowledge Portal and the "Knowledge Base" project of Wienerberger AG. In this way the requirements (what enterprise applications need today) and the capabilities of the systems (what Business Semantics offer) are made concrete and set against each other.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.359-366
  20. Kunze, C.: Lexikalisch-semantische Wortnetze in Sprachwissenschaft und Sprachtechnologie (2006) 0.02
    0.019075774 = coord(1/4) × 0.076303095
      0.076303095 = weight(_text_:und in 23): queryWeight 0.1471148 × fieldWeight 0.51866364; tf=3.7416575 (freq=14.0), idf=2.217899, fieldNorm=0.0625
    
    Abstract
    This contribution describes the structuring principles and application contexts of lexical-semantic wordnets, in particular the German wordnet GermaNet. Wordnets are currently especially popular electronic lexical resources that provide broad coverage of semantically structured data for various languages and language groups. A wordnet represents the most frequent and most important concepts of a language together with their elementary semantic relations. Central applications of wordnets include word-sense disambiguation and subject indexing. The article sketches the newest scenarios in which GermaNet is being used: semantic information indexing, and the integration of general-language wordnets with terminological resources against the background of converting the data to OWL.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.309-314

Languages

  • d 119
  • e 47
  • pt 1

Types

  • a 103
  • el 39
  • x 21
  • m 18
  • r 7
  • s 4
  • n 2
  • p 1
