Search (1888 results, page 1 of 95)

  • Active filter: year_i:[2010 TO 2020}
  1. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.13
    0.13323762 = product of:
      0.26647523 = sum of:
        0.24246171 = weight(_text_:java in 1604) [ClassicSimilarity], result of:
          0.24246171 = score(doc=1604,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.5450528 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1604)
        0.024013512 = weight(_text_:und in 1604) [ClassicSimilarity], result of:
          0.024013512 = score(doc=1604,freq=2.0), product of:
            0.13999446 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0631203 = queryNorm
            0.17153187 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1604)
      0.5 = coord(2/4)
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby programming language. To enhance the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
    Theme
    Conception and application of the thesaurus principle
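
The relevance figures above are Lucene ClassicSimilarity explanations: each matching term contributes weight = queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and the summed weights are scaled by the coordination factor coord(matched terms/query terms). A minimal sketch that recomputes the figures of result 1 from the values printed in its explanation (an illustration of the formula, not the engine's code):

```java
// Recomputes the ClassicSimilarity explanation shown for result 1 (doc 1604).
public class ExplainResult1 {

    // weight(term) = queryWeight * fieldWeight
    static double termWeight(double freq, double idf, double queryNorm, double fieldNorm) {
        double queryWeight = idf * queryNorm;                   // idf(t) * queryNorm(q)
        double fieldWeight = Math.sqrt(freq) * idf * fieldNorm; // tf(t,d) * idf(t) * norm(t,d)
        return queryWeight * fieldWeight;
    }

    public static void main(String[] args) {
        double queryNorm = 0.0631203;
        // _text_:java -> idf=7.0475073, freq=2.0, fieldNorm=0.0546875
        double wJava = termWeight(2.0, 7.0475073, queryNorm, 0.0546875);
        // _text_:und  -> idf=2.217899,  freq=2.0, fieldNorm=0.0546875
        double wUnd = termWeight(2.0, 2.217899, queryNorm, 0.0546875);
        double coord = 2.0 / 4.0; // 2 of the 4 query terms matched
        System.out.printf("java=%.8f und=%.8f score=%.8f%n",
                wJava, wUnd, (wJava + wUnd) * coord);
        // Prints (up to floating-point rounding):
        // java=0.24246171 und=0.02401351 score=0.13323762
    }
}
```

The same arithmetic reproduces every score breakdown on this page; only idf, freq, fieldNorm, and the coord fraction vary between entries.
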
  2. Groß, M.; Rusch, B.: Open Source Programm Mable+ zur Analyse von Katalogdaten veröffentlicht (2011) 0.12
    0.12173758 = product of:
      0.24347515 = sum of:
        0.20782433 = weight(_text_:java in 1181) [ClassicSimilarity], result of:
          0.20782433 = score(doc=1181,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.46718815 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=1181)
        0.035650823 = weight(_text_:und in 1181) [ClassicSimilarity], result of:
          0.035650823 = score(doc=1181,freq=6.0), product of:
            0.13999446 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0631203 = queryNorm
            0.25465882 = fieldWeight in 1181, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.046875 = fieldNorm(doc=1181)
      0.5 = coord(2/4)
    
    Abstract
    As one of the results of the strategic alliance formed in 2007 between BVB and KOBV, Mable+, a Java-based open-source software for the automatic data and error analysis of library catalogs, was released on 12 September 2011. Based on the MAB data exchange format, Mable+ enables the formal checking of catalog data combined with a statistical evaluation of the distribution of fields. For this it requires a MAB dump of the catalog in MAB2 tape format with the MAB2 character set. This data package is analyzed within a few minutes. The result is a report with general statistics on the checked records (distribution of record types, number of MAB fields, etc.) as well as a list of the errors found. The software has already been used successfully in the migration of the catalog data of all KOBV libraries into the B3Kat. General information and various instructions for using the program can be found on the project website http://mable.kobv.de/. The software can be downloaded at http://mable.kobv.de/download.html. A follow-up concept for the use and modification of the software is currently being developed.
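
As a rough illustration of the field-distribution statistic described above, the sketch below tallies MAB field tags in a catalog dump. The one-field-per-line layout with a three-digit tag at the start of each line, and the file name catalog.mab, are simplifying assumptions; the real MAB2 tape format is more involved, and this is not Mable+ code.

```java
// Counts how often each (assumed) three-digit MAB field tag occurs in a dump.
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;

public class MabFieldStats {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : Files.readAllLines(Path.of("catalog.mab"))) { // placeholder path
            if (line.length() >= 3 && line.substring(0, 3).matches("\\d{3}")) {
                counts.merge(line.substring(0, 3), 1, Integer::sum); // tally the field tag
            }
        }
        counts.forEach((tag, n) -> System.out.println(tag + "\t" + n));
    }
}
```
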
  3. Gomes, P.; Guiomar da Cunha Frota, M.: Knowledge organization from a social perspective : thesauri and the commitment to cultural diversity (2019) 0.06
    0.062223416 = product of:
      0.24889366 = sum of:
        0.24889366 = weight(_text_:holding in 645) [ClassicSimilarity], result of:
          0.24889366 = score(doc=645,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.5112703 = fieldWeight in 645, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.046875 = fieldNorm(doc=645)
      0.25 = coord(1/4)
    
    Abstract
    Knowledge organization systems can carry linguistic and conceptual formations of social oppression and exclusion. It is information science's role to be vigilant against perpetuating seditious discourses that end up reaffirming offenses, prejudices, and humiliations toward certain groups of people, especially those labeled as marginalized, that is, those who are not part of the dominant group holding social power. In the quest for this diversity, this study reviews the literature of the area on how thesauri can become more inclusive and on the role of semantic warrant, specifically the philosophical, literary, and cultural warrants. This research highlights the need to review thesaurus construction models so that they can be more open and inclusive toward the cultural diversity of today's society, formed by social actors who claim their spaces and representations. To this end, guidelines are suggested for thesaurus-construction procedures that allow receptivity to cultural warrant.
  4. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.05
    0.051956084 = product of:
      0.20782433 = sum of:
        0.20782433 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
          0.20782433 = score(doc=3605,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.46718815 = fieldWeight in 3605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3605)
      0.25 = coord(1/4)
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science, and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare, and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end-of-chapter problems (instructors only) / Test collections for exercises / Galago search engine
  5. Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.051956084 = product of:
      0.20782433 = sum of:
        0.20782433 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
          0.20782433 = score(doc=4615,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.46718815 = fieldWeight in 4615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4615)
      0.25 = coord(1/4)
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
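
To give a flavor of the rule-based detection the abstract describes, here is a hand-rolled analogue of a SWRL-style rule such as Claim(?c) ^ hasService(?c,?s) ^ hasDiagnosis(?c,?d) ^ notIndicatedFor(?s,?d) -> Irregular(?c). All class, property, and instance names are hypothetical; the actual system expresses such rules in SWRL and evaluates them with JESS over the ontology.

```java
// Hypothetical analogue of a SWRL irregularity rule, evaluated by hand.
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FraudRuleSketch {
    record Claim(String id, String service, String diagnosis) {}

    public static void main(String[] args) {
        // notIndicatedFor(service, diagnosis): invented domain knowledge
        Map<String, Set<String>> notIndicatedFor =
                Map.of("MRI-head", Set.of("sprained-ankle"));
        List<Claim> claims = List.of(
                new Claim("c1", "MRI-head", "sprained-ankle"),
                new Claim("c2", "X-ray-ankle", "sprained-ankle"));
        for (Claim c : claims) {
            boolean irregular = notIndicatedFor
                    .getOrDefault(c.service(), Set.of())
                    .contains(c.diagnosis()); // rule body satisfied -> rule head fires
            System.out.println(c.id() + (irregular ? " -> Irregular" : " -> ok"));
        }
    }
}
```
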
  6. Marcial, L.H.; Hemminger, B.M.: Scientific data repositories on the Web : an initial survey (2010) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 983) [ClassicSimilarity], result of:
          0.20741136 = score(doc=983,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 983, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=983)
      0.25 = coord(1/4)
    
    Abstract
    Science Data Repositories (SDRs) have been recognized as both critical to science and undergoing fundamental change. A web-based sample study of 100 SDRs was conducted. Information on the websites and from administrators of the SDRs was reviewed to determine salient characteristics of the SDRs, which were used to classify the SDRs into groups using a combination of cluster analysis and logistic regression. Characteristics of the SDRs were explored for their role in determining groupings and for their relationship to the success of SDRs. Four of these characteristics were identified as important for further investigation: whether the SDR is supported with grants and contracts, whether support comes from multiple sponsors, what the holding size of the SDR is, and whether a preservation policy exists for the SDR. An inferential framework for understanding SDR composition, guided by observations, characteristic collection and refinement, and subsequent analysis of elements of group membership, is discussed. The development of SDRs is further examined from a business standpoint and in comparison to their most similar form, institutional repositories. Because this work identifies important characteristics of SDRs and which characteristics potentially impact the sustainability and success of SDRs, it is expected to be helpful to SDRs.
  7. Benjamin, V.; Chen, H.; Zimbra, D.: Bridging the virtual and real : the relationship between web content, linkage, and geographical proximity of social movements (2014) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 2527) [ClassicSimilarity], result of:
          0.20741136 = score(doc=2527,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 2527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2527)
      0.25 = coord(1/4)
    
    Abstract
    As the Internet becomes ubiquitous, it has advanced to more closely represent aspects of the real world. Due to this trend, researchers in various disciplines have become interested in studying relationships between real-world phenomena and their virtual representations. One such area of emerging research seeks to study relationships between the real-world and virtual activism of social movement organizations (SMOs). In particular, SMOs holding extreme social perspectives are often studied due to their tendency to have robust virtual presences to circumvent real-world social barriers preventing information dissemination. However, many previous studies have been limited in scope because they utilize manual data-collection and analysis methods. They also often have failed to consider the real-world aspects of groups that partake in virtual activism. We utilize automated data-collection and analysis methods to identify significant relationships between aspects of SMO virtual communities and their respective real-world locations and ideological perspectives. Our results also demonstrate that the interconnectedness of SMO virtual communities is affected specifically by aspects of the real world. These observations provide insight into the behaviors of SMOs within virtual environments, suggesting that the virtual communities of SMOs are strongly affected by aspects of the real world.
  8. Zuccala, A.; Someren, M. van; Bellen, M. van: ¬A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 2530) [ClassicSimilarity], result of:
          0.20741136 = score(doc=2530,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 2530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2530)
      0.25 = coord(1/4)
    
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
  9. Pazooki, F.; Zeinolabedini, M.H.; Arastoopoor, S.: Acceptance and viewpoint of iranian catalogers regarding RDA : the case of the National Library and Archive of Iran (2014) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 2987) [ClassicSimilarity], result of:
          0.20741136 = score(doc=2987,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 2987, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2987)
      0.25 = coord(1/4)
    
    Abstract
    The general purpose of this study is to assess the extent of catalogers' familiarity with Resource Description and Access (RDA), their readiness to accept these rules, and the effect of training on this issue. The methodology of the presented research is a survey study using a descriptive-analytic approach. In this research, the familiarity with RDA of 49 catalogers working in the Cataloging in Publication (CIP) department at the National Library and Archive of Iran was monitored before and after a training session through a questionnaire. It was specifically prepared for measuring catalogers' familiarity with, and acceptance of, RDA and also highlighting the self-identified and actual levels of this familiarity and acceptance. The results show that before training, catalogers' self-identified familiarity with RDA was higher than the average level. But after the training session, both self-identified and actual familiarity rose dramatically. Furthermore, the significant difference between the research population's features and the self-identified familiarity, actual familiarity, and acceptance rate of the rules among catalogers was examined. In this study, it was confirmed that there is a significant difference between the self-stated and actual familiarity of catalogers regarding RDA. According to the results, M.A. catalogers have a higher self-identified familiarity than B.A. catalogers. It was also confirmed that the actual familiarity of catalogers with an M.A. degree before training is higher than that of catalogers holding a B.A.
  10. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 3037) [ClassicSimilarity], result of:
          0.20741136 = score(doc=3037,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 3037, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3037)
      0.25 = coord(1/4)
    
    Abstract
    This is a publisher ranking study based on a citation data grant from Elsevier, specifically, book titles cited in Scopus history journals (2007-2011) and matching metadata from WorldCat® (i.e., OCLC numbers, ISBN codes, publisher records, and library holding counts). Using both resources, we have created a unique relational database designed to compare citation counts to books with international library holdings or libcitations for scholarly book publishers. First, we construct a ranking of the top 500 publishers and explore descriptive statistics at the level of publisher type (university, commercial, other) and country of origin. We then identify the top 50 university presses and commercial houses based on total citations and mean citations per book (CPB). In a third analysis, we present a map of directed citation links between journals and book publishers. American and British presses/publishing houses tend to dominate the work of library collection managers and citing scholars; however, a number of specialist publishers from Europe are included. Distinct clusters from the directed citation map indicate a certain degree of regionalism and subject specialization, where some journals produced in languages other than English tend to cite books published by the same parent press. Bibliometric rankings convey only a small part of how the actual structure of the publishing field has evolved; hence, challenges lie ahead for developers of new citation indices for books and bibliometricians interested in measuring book and publisher impacts.
  11. Wakeling, S.; Clough, P.; Connaway, L.S.; Sen, B.; Tomás, D.: Users and uses of a global union catalog : a mixed-methods study of WorldCat.org (2017) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 4794) [ClassicSimilarity], result of:
          0.20741136 = score(doc=4794,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 4794, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4794)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents the first large-scale investigation of the users and uses of WorldCat.org, the world's largest bibliographic database and global union catalog. Using a mixed-methods approach involving focus group interviews with 120 participants, an online survey with 2,918 responses, and an analysis of transaction logs of approximately 15 million sessions from WorldCat.org, the study provides a new understanding of the context for global union catalog use. We find that WorldCat.org is accessed by a diverse population, with the three primary user groups being librarians, students, and academics. Use of the system is found to fall within three broad types of work-task (professional, academic, and leisure), and we also present an emergent taxonomy of search tasks that encompass known-item, unknown-item, and institutional information searches. Our results support the notion that union catalogs are primarily used for known-item searches, although the volume of traffic to WorldCat.org means that unknown-item searches nonetheless represent an estimated 250,000 sessions per month. Search engine referrals account for almost half of all traffic, but although WorldCat.org effectively connects users referred from institutional library catalogs to other libraries holding a sought item, users arriving from a search engine are less likely to connect to a library.
  12. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.05
    0.05185284 = product of:
      0.20741136 = sum of:
        0.20741136 = weight(_text_:holding in 4870) [ClassicSimilarity], result of:
          0.20741136 = score(doc=4870,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.42605853 = fieldWeight in 4870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4870)
      0.25 = coord(1/4)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with a meaningful thematic overlay, library holding count data was also harvested (and aggregated across duplicates). This additional data revealed the relative scarcity or popularity of individual books.
  13. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.04
    0.043296736 = product of:
      0.17318694 = sum of:
        0.17318694 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
          0.17318694 = score(doc=4179,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.38932347 = fieldWeight in 4179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4179)
      0.25 = coord(1/4)
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
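
A minimal sketch of the kind of Jena-based extraction step the abstract mentions: load an OWL ontology and enumerate its classes and properties. The file name is a placeholder and this is not the authors' code; it assumes the Apache Jena libraries are on the classpath.

```java
// Enumerates the classes and properties of an OWL ontology with Apache Jena.
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class OntologyDump {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("classical-music-recording.owl"); // placeholder file name
        model.listClasses().forEachRemaining(c ->
                System.out.println("class:    " + c.getLocalName()));
        model.listOntProperties().forEachRemaining(p ->
                System.out.println("property: " + p.getLocalName()));
    }
}
```
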
  14. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.04
    0.041482273 = product of:
      0.1659291 = sum of:
        0.1659291 = weight(_text_:holding in 728) [ClassicSimilarity], result of:
          0.1659291 = score(doc=728,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.34084684 = fieldWeight in 728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.03125 = fieldNorm(doc=728)
      0.25 = coord(1/4)
    
    Abstract
    The amount and complexity of information available in digital form is already huge, and new information is being produced every day. Retrieving information relevant to a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage, and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library, holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources and information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the Agricultural Thesaurus, was adapted to automatically extract concepts important for each of the texts in the Best Management Practices (BMP) Publication Library - a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery using an Internet search engine was developed. Initial evaluation of the results indicates significant improvement in the precision of information extraction. The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of potential interest to anyone who needs to effectively manage a collection of digital resources.
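
The abstract names the Vector Space Model among the adapted techniques. Below is a minimal illustration of VSM scoring, cosine similarity between term-weight vectors; the vocabulary and weights are invented for the example and do not come from the described system.

```java
// Cosine similarity between sparse term-weight vectors (Vector Space Model).
import java.util.Map;

public class VsmSketch {
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (var e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            normA += e.getValue() * e.getValue();
        }
        for (double v : b.values()) normB += v * v;
        return dot == 0 ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        Map<String, Double> doc   = Map.of("water", 3.0, "conservation", 2.0, "florida", 1.0);
        Map<String, Double> query = Map.of("water", 1.0, "conservation", 1.0);
        System.out.printf("similarity = %.3f%n", cosine(query, doc)); // ~0.945
    }
}
```
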
  15. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.04
    0.041482273 = product of:
      0.1659291 = sum of:
        0.1659291 = weight(_text_:holding in 3547) [ClassicSimilarity], result of:
          0.1659291 = score(doc=3547,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.34084684 = fieldWeight in 3547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.03125 = fieldNorm(doc=3547)
      0.25 = coord(1/4)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Features such as ranking, autocompletion, multi-faceted classification, and refinement options reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro-thesauri for MINT (the German equivalent of STEM) topics in order to develop an advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at the TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3,000 terms in our discovery tool WILBERT.
  16. Sud, P.; Thelwall, M.: Not all international collaboration is beneficial : the Mendeley readership and citation impact of biochemical research collaboration (2016) 0.04
    0.041482273 = product of:
      0.1659291 = sum of:
        0.1659291 = weight(_text_:holding in 4048) [ClassicSimilarity], result of:
          0.1659291 = score(doc=4048,freq=2.0), product of:
            0.48681426 = queryWeight, product of:
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.0631203 = queryNorm
            0.34084684 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.7124834 = idf(docFreq=53, maxDocs=44421)
              0.03125 = fieldNorm(doc=4048)
      0.25 = coord(1/4)
    
    Abstract
    Biochemistry is a highly funded research area that is typified by large research teams and is important for many areas of the life sciences. This article investigates the citation impact and Mendeley readership impact of biochemistry research from 2011 in the Web of Science according to the type of collaboration involved. Negative binomial regression models are used that incorporate, for the first time, the inclusion of specific countries within a team. The results show that, holding other factors constant, larger teams robustly associate with higher impact research, but including additional departments has no effect and adding extra institutions tends to reduce the impact of research. Although international collaboration is apparently not advantageous in general, collaboration with the United States, and perhaps also with some other countries, seems to increase impact. In contrast, collaboration with some other nations seems to decrease impact, although both findings could be due to factors such as differing national proportions of excellent researchers. As a methodological implication, simpler statistical models would find international collaboration to be generally beneficial, and so it is important to take into account specific countries when examining collaboration.
  17. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.03
    0.034637388 = product of:
      0.13854955 = sum of:
        0.13854955 = weight(_text_:java in 935) [ClassicSimilarity], result of:
          0.13854955 = score(doc=935,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.31145877 = fieldWeight in 935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=935)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique, to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesaurus resources for driving and enhancing an ontology-oriented information extraction process. The process of semantic indexing is based on a rule-based information extraction technique, which is facilitated by the General Architecture for Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents, often referred to as "grey literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
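
In GATE such extraction rules are written in JAPE; as a rough, hypothetical analogue, the sketch below tags time-appellation-like spans with a regular expression, in the spirit of annotating the CRM concept E49.Time Appellation. It is not GATE/JAPE code, and the pattern and label are illustrative only.

```java
// Tags period/date expressions as a stand-in for a JAPE annotation rule.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimeAppellationSketch {
    public static void main(String[] args) {
        String text = "The ditch fill dates to the Early Bronze Age; "
                + "pottery of the 2nd century AD was recovered.";
        Pattern period = Pattern.compile(
                "(Early|Middle|Late)?\\s?(Bronze Age|Iron Age|Neolithic)"
                + "|\\d+(st|nd|rd|th) century( AD| BC)?");
        Matcher m = period.matcher(text);
        while (m.find()) {
            System.out.println("E49.Time_Appellation: \"" + m.group().trim() + "\"");
        }
    }
}
```
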
  18. Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015) 0.03
    0.034637388 = product of:
      0.13854955 = sum of:
        0.13854955 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
          0.13854955 = score(doc=3301,freq=2.0), product of:
            0.44484076 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0631203 = queryNorm
            0.31145877 = fieldWeight in 3301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3301)
      0.25 = coord(1/4)
    
    Abstract
    Analytico-synthetic and faceted classifications, such as the Universal Decimal Classification (UDC), express the content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations are stored in an intermediate format (in this case, XML) by automatic means without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats, or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of the software as a service. This would allow the algorithm to be employed both in existing and future library systems to analyse UDC numbers without any significant programming effort.
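
As a toy illustration of the intermediate-format idea, and emphatically not the author's parser, the sketch below splits a pre-combined UDC notation on the standard connector symbols +, / and : and emits a flat XML fragment. Real UDC syntax (grouping with square brackets, common auxiliaries, and so on) is far richer than this split suggests.

```java
// Splits a combined UDC notation into components and prints them as XML.
public class UdcSplitSketch {
    public static void main(String[] args) {
        // 622+669(485): mining plus metallurgy, with the place auxiliary for Sweden
        String notation = "622+669(485)";
        System.out.println("<udc notation=\"" + notation + "\">");
        for (String part : notation.split("[+/:]")) { // +, / and : connect UDC numbers
            System.out.println("  <component>" + part + "</component>");
        }
        System.out.println("</udc>");
    }
}
```
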
  19. Kübler, H.-D.: Digitale Vernetzung (2018) 0.02
    0.024013512 = product of:
      0.09605405 = sum of:
        0.09605405 = weight(_text_:und in 279) [ClassicSimilarity], result of:
          0.09605405 = score(doc=279,freq=32.0), product of:
            0.13999446 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0631203 = queryNorm
            0.6861275 = fieldWeight in 279, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=279)
      0.25 = coord(1/4)
    
    Abstract
    Networking and networks are found everywhere, have many kinds of quality and materiality, serve diverse purposes and functions, and constitute different infrastructures, not only of a communicative and social kind. With the development and spread of information technology, global transport and switching systems, and finally the ongoing digitization, the concept and the connectivity it denotes have become omnipresent and focused on digital networks, which find their most important and most consequential prototype in the Internet, the network of networks. Its development is presented compactly here. The application fields already available, as well as future ones (Industry 4.0, the Internet of Things), suggest revolutionary upheavals in all segments of society that can hardly be steered or controlled any longer by national legislation and politics, and that, alongside undeniably many advantages and improvements, may also produce risks and disadvantages.
  20. Rusch, G.: Sicherheit und Freiheit (2015) 0.02
    0.021831578 = product of:
      0.08732631 = sum of:
        0.08732631 = weight(_text_:und in 3666) [ClassicSimilarity], result of:
          0.08732631 = score(doc=3666,freq=36.0), product of:
            0.13999446 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0631203 = queryNorm
            0.62378407 = fieldWeight in 3666, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.046875 = fieldNorm(doc=3666)
      0.25 = coord(1/4)
    
    Abstract
    Here and today, the words freedom and security denote above all those political concepts that stake out the rhetorical frame of reference in the security-policy coordinate system of our Western democracies, both internally and externally. Legitimizing and agitational discourses, election-campaign rhetoric and parliamentary debates, civil society and political administration regularly and formulaically invoke concepts of freedom and security for their respective purposes. The concepts are often placed in an oppositional relationship to one another: more (e.g., domestic) security then means less (e.g., personal) freedom, and vice versa. Or security becomes the precondition and condition of freedom (e.g., in the "militant democracy"). Yet the operational roots of these concepts in perception, behavior, and action thereby slip far out of view. What initial and consolidated impressions, insights, and experiences are they to which we refer affectively and rationally with these concepts? What does security feel like? What do behavior or action look like as an expression of freedom? Can freedom be felt? Of what freedom is one capable at all? How much security does life require? What operational evidence do perception and behavior offer for the concepts of security and freedom, prior to all their ideological charges, historical interpretations, and philosophical explications?

Languages

  • d 1631
  • e 228
  • a 1

Types

  • a 1388
  • el 405
  • m 280
  • x 75
  • s 60
  • r 26
  • n 8
  • i 3
  • b 2
  • p 2
  • v 2
  • ms 1
