Search (1981 results, page 6 of 100)

  • Filter: year_i:[2010 TO 2020}
  1. Vanopstal, K.; Stichele, R.Vander; Laureys, G.; Buysschaert, J.: PubMed searches by Dutch-speaking nursing students : the impact of language and system experience (2012) 0.02
    0.021671178 = product of:
      0.08668471 = sum of:
        0.08668471 = weight(_text_:headings in 1369) [ClassicSimilarity], result of:
          0.08668471 = score(doc=1369,freq=2.0), product of:
            0.32337824 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.06664293 = queryNorm
            0.26805982 = fieldWeight in 1369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1369)
      0.25 = coord(1/4)
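The explain tree above follows Lucene's ClassicSimilarity tf-idf formulation and can be checked by hand. A minimal Python sketch reproducing the numbers, using only the constants shown in the tree (the variable names are ours, not Lucene's):

```python
import math

# Constants copied from the explain tree above.
doc_freq, max_docs = 942, 44421   # term "headings": document frequency / corpus size
freq = 2.0                        # term frequency in this field
field_norm = 0.0390625            # encoded length norm for doc 1369
query_norm = 0.06664293           # query normalization factor, as given
coord = 0.25                      # 1 of 4 query clauses matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # inverse document frequency, ~4.8524013
tf = math.sqrt(freq)                             # term-frequency component, ~1.4142135
query_weight = idf * query_norm                  # ~0.32337824
field_weight = tf * idf * field_norm             # ~0.26805982
score = coord * query_weight * field_weight      # ~0.021671178, the entry's score
```

The same tree repeats for every hit on this page because each document matches the query term "headings" exactly twice in a field of the same encoded length.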
    
    Abstract
    This study analyzes the search behavior of Dutch-speaking nursing students with a nonnative knowledge of English who searched for information in MEDLINE/PubMed about a specific theme in nursing. We examine whether and to what extent their search efficiency is affected by their language skills. Our task-oriented approach focuses on three stages of the information retrieval process: need articulation, query formulation, and relevance judgment. The test participants completed a pretest questionnaire, which gave us information about their overall experience with the search system and their self-reported computer and language skills. The students were briefly introduced to the use of PubMed and MeSH (medical subject headings) before they conducted their keyword-driven subject search. We assessed the search results in terms of recall and precision, and also analyzed the search process. After the search task, a satisfaction survey and a language test were completed. We conclude that language skills have an impact on the search results. We hypothesize that language support might improve the efficiency of searches conducted by Dutch-speaking users of PubMed.
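The recall and precision assessment the authors describe is the standard set-based one; a minimal sketch with hypothetical document identifiers (not the study's data):

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for a single search."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# A student retrieves five records; four records are judged relevant in total.
p, r = precision_recall(
    retrieved=["pmid1", "pmid2", "pmid3", "pmid4", "pmid5"],
    relevant=["pmid2", "pmid4", "pmid9", "pmid10"],
)
# p = 2/5 = 0.4, r = 2/4 = 0.5
```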
  2. Wartena, C.; Sommer, M.: Automatic classification of scientific records using the German Subject Heading Authority File (SWD) (2012) 0.02
    
    Abstract
    The following paper deals with an automatic text classification method which does not require training documents. For this method, the German Subject Heading Authority File (SWD), provided by the linked data service of the German National Library, is used. Recently the SWD was enriched with notations of the Dewey Decimal Classification (DDC). As a consequence, it became possible to utilize the subject headings as textual representations for the notations of the DDC. Basically, we derive the classification of a text from the classification of the words in the text given by the thesaurus. The method was tested by classifying 3826 OAI records from 7 different repositories. Mean reciprocal rank and recall were chosen as evaluation measures. Direct comparison to a machine learning method has shown that this method is definitely competitive. Thus we can conclude that the enriched version of the SWD provides high-quality information with a broad coverage for classification of German scientific articles.
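Mean reciprocal rank, one of the two measures chosen in the abstract above, rewards a correct class notation appearing early in the ranked output. A minimal sketch with invented toy records (not the 3826-record test set):

```python
def mean_reciprocal_rank(rankings, correct):
    """MRR: average over records of 1/rank of the first correct class; 0 if absent."""
    total = 0.0
    for rec_id, ranked_classes in rankings.items():
        for rank, ddc in enumerate(ranked_classes, start=1):
            if ddc in correct[rec_id]:
                total += 1.0 / rank
                break
    return total / len(rankings)

rankings = {
    "rec1": ["020", "330", "004"],   # correct class ranked first
    "rec2": ["330", "020", "610"],   # correct class ranked second
    "rec3": ["610", "004", "330"],   # correct class never retrieved
}
correct = {"rec1": {"020"}, "rec2": {"020"}, "rec3": {"500"}}
mrr = mean_reciprocal_rank(rankings, correct)  # (1 + 1/2 + 0) / 3 = 0.5
```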
  3. Julien, C.-A.; Tirilly, P.; Leide, J.E.; Guastavino, C.: Using the LCSH hierarchy to browse a collection (2012) 0.02
    
    Abstract
    The Library of Congress Subject Headings (LCSH) is a subject structure used to index large collections throughout the world. Browsing a collection through LCSH is difficult using current on-line tools in part because they are inadequately integrated with information collections. Users of these LCSH browsing tools are expected to find a promising LCSH string before using it to search for the information itself; many users do not have the patience for such a two-step process. This article proposes a method to fully integrate a specific collection in its subset of the LCSH hierarchy in order to facilitate LCSH browsing as well as information retrieval. Techniques are described to match LCSH strings assigned to the collection with an established string from the authority records, and build their specific LCSH hierarchy. The resulting subset of LCSH structure is described in terms of its size and broader/narrower term statistics, and implications for browsing and information retrieval are discussed. The results of this research have implications for institutions wishing to further capitalize on existing LCSH organization investments for the purpose of subject browsing and information retrieval.
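The core of the method proposed above, extracting the subset of the LCSH hierarchy specific to one collection, amounts to walking up broader-term (BT) links from each heading assigned to the collection. A minimal sketch with invented headings, not actual LCSH authority data:

```python
def collection_subset(assigned_headings, broader_term):
    """Headings assigned to the collection plus all their broader terms up to a root."""
    subset = set()
    for heading in assigned_headings:
        while heading is not None and heading not in subset:
            subset.add(heading)
            heading = broader_term.get(heading)  # None once a top term is reached
    return subset

# Toy BT links (narrower heading -> broader heading), standing in for authority records.
bt = {"Siamese cat": "Cats", "Cats": "Pets", "Dogs": "Pets"}
subset = collection_subset(["Siamese cat", "Dogs"], bt)
# subset == {"Siamese cat", "Cats", "Pets", "Dogs"}
```

Broader/narrower statistics of the kind the article reports can then be read directly off this subset.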
  4. Buckland, M.K.: Knowledge organization and the technology of intellectual work (2014) 0.02
    
    Abstract
    Since ancient times intellectual work has required tools for writing, documents for reading, and bibliographies for finding, not to mention more specialized techniques and technologies. Direct personal discussion is often impractical and we depend on documents instead. Document technology evolved through writing, printing, telecommunications, copying, and computing and facilitated an 'information flood' which motivated important knowledge organization initiatives, especially in the nineteenth century (library science, bibliography, documentation). Electronics and the Internet amplified these trends. As an example we consider an initiative to provide shared access to the working notes of editors preparing scholarly editions of historically important texts. For the future, we can project trends leading to ubiquitous recording, pervasive representations, simultaneous interaction regardless of geography, and powerful analysis and visualization of the records resulting from that ubiquitous recording. This evolving situation has implications for publishing, archival practice, and knowledge organization. The passing of time is of special interest in knowledge organization because knowing is cultural, living, and always changing. Technique and technology are also cultural ("material culture") but fixed and inanimate, as can be seen in the obsolescence of subject headings, which remain inscribed while culture moves on. The tension between the benefits of technology and the limitations imposed by fixity in a changing world provide a central tension in knowledge organization over time.
  5. Taheri, S.M.; Shahrestani, Z.; Nezhad, M.H.Y.: Switching languages and the national content consortiums : an overview on the challenges of designing an Iranian model (2014) 0.02
    
    Abstract
    The aim of this study, as a conceptual research, is to analyze the challenges of designing a switching language for the Iranian National Content Consortium (INCC) using an analytical-critical approach. The current situation of the semantic systems that have been constructed and developed in Iran and the challenges of designing a switching language for the INCC are examined. The most important challenges in designing an Iranian model of a switching language for the INCC are the approximate nature of the mappings among the subject terms of the Iranian semantic systems (such as thesauri, subject heading lists, and classification schemes), the ambiguity of the native features of the Islamic-Iranian information context, the lack of a general and comprehensive classification scheme, the accessibility of content objects in other, non-Persian languages, and the like. The study is the first of its kind to deal with the challenges of designing a switching language in a practical approach that emphasizes the information environment of the INCC.
  6. Rafferty, P.; Murphy, H.: Is there nothing outside the tags? : towards a poststructuralist analysis of social tagging (2015) 0.02
    
    Abstract
    Purpose: The purpose of the research is to explore relationships between social tagging and key poststructuralist principles; to devise and construct an analytical framework through which key poststructuralist principles are converted into workable research questions and applied to analyse LibraryThing tags; and to assess the validity of performing such an analysis. The research hypothesis is that tagging represents an imperfect analogy for the poststructuralist project. Design/methodology/approach: Tags from LibraryThing and from a library OPAC were compared and contrasted with Library of Congress Subject Headings (LCSH) and publishers' descriptions. Research questions derived from poststructuralism asked whether tags destabilise meaning, whether and how far the death of the author is expressed in tags, and whether tags deconstruct LCSH. Findings: Tags can temporarily destabilise meaning by obfuscating the structure of a word. Meaning is destabilised, perhaps only momentarily, and then it is recreated; it might resemble the original meaning, or it may not; however, any attempt to make tags useful or functional necessarily imposes some form of structure. The analysis indicates that in tagging, the author, if not dead, is ignored. Authoritative interpretations are not pervasively mimicked in the tags. In relation to LCSH, tagging decentres the dominant view, but neither exposes nor judges it. Nor does tagging achieve the final stage of the deconstructive process, showing the dominant view to be a constructed reality. Originality/value: This is one of very few studies to have attempted a critical theoretical approach to social tagging. It offers a novel methodological approach to undertaking analysis based on poststructuralist theory.
  7. Casson, E.; Fabbrizzi, A.; Slavic, A.: Subject search in Italian OPACs : an opportunity in waiting? (2011) 0.02
    
    Abstract
    Subject access to bibliographic data supported by knowledge organization systems, such as subject headings and classification, plays an important role in ensuring the quality of library catalogues. It is generally acknowledged that users have a strong affinity for subject browsing and searching and are inclined to follow meaningful links between resources. Research studies, however, show that library OPACs are not designed to support or make good use of subject indexes and their underlying semantic structure. A project entitled OPAC semantici was initiated in 2003 by a number of Italian subject specialists and the Italian "Research Group on Subject Indexing" (GRIS) with the goal of analysing and evaluating subject access in Italian library catalogues through a survey of 150 OPACs. Applying the same methodology, a follow-up survey to assess whether any improvement had taken place was conducted five years later, in spring 2008. Analysis of these two surveys indicated that there was a slight improvement. The authors discuss the results of these two surveys, analyse the problems in subject searching in OPACs, and explain the recommendations for subject searching enhancement put forward by GRIS. Using the example of Italian OPACs, the authors attempt to outline some requirements for a subject searching interface and explain how this can be achieved through authority control.
  8. Zhang, X.; Liu, J.; Cole, M.; Belkin, N.: Predicting users' domain knowledge in information retrieval using multiple regression analysis of search behaviors (2015) 0.02
    
    Abstract
    User domain knowledge affects search behaviors and search success. Predicting a user's knowledge level from implicit evidence such as search behaviors could allow an adaptive information retrieval system to better personalize its interaction with users. This study examines whether user domain knowledge can be predicted from search behaviors by applying a regression modeling analysis method. We identify behavioral features that contribute most to a successful prediction model. A user experiment was conducted with 40 participants searching on task topics in the domain of genomics. Participant domain knowledge level was assessed based on the users' familiarity with and expertise in the search topics and their knowledge of MeSH (Medical Subject Headings) terms in the categories that corresponded to the search topics. Users' search behaviors were captured by logging software and include querying behaviors, document selection behaviors, and general task interaction behaviors. Multiple regression analysis was run on the behavioral data using different variable selection methods. Four successful predictive models were identified, each involving a slightly different set of behavioral variables. The models were compared on model fit, model significance, and the contributions of individual predictors. Each model was validated using the split sampling method. The final model highlights three behavioral variables as domain knowledge level predictors: the number of documents saved, the average query length, and the average ranking position of the documents opened. The results are discussed, study limitations are addressed, and future research directions are suggested.
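A multiple regression of the kind described can be sketched in pure Python via the normal equations. The three predictor names mirror the final model's variables, but the observations below are invented illustration data, not the study's:

```python
def fit_ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]          # prepend an intercept column
    n = len(rows[0])
    # Build A = X'X and c = X'y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for col in range(n):                          # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * n                                 # back substitution
    for i in range(n - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][k] * b[k] for k in range(i + 1, n))) / A[i][i]
    return b  # [intercept, coef_saved, coef_query_len, coef_rank]

# Invented rows: (documents saved, average query length, average rank of docs opened)
X = [(3, 4.0, 2.0), (5, 6.0, 1.5), (2, 3.5, 4.0), (7, 5.0, 3.0), (4, 4.5, 2.5)]
y = [1 + 2 * s + 0.5 * q - 0.1 * r for s, q, r in X]  # exact linear relation
coeffs = fit_ols(X, y)  # recovers [1.0, 2.0, 0.5, -0.1] up to float error
```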
  9. Smiraglia, R.P.: Keywords redux : an editorial (2015) 0.02
    
    Abstract
    In KO volume 40 number 3 (2013) I included an editorial about keywords: both about the absence, prior to that date, of designated keywords in articles in Knowledge Organization, and about the misuse of the idea by some other journal publications (Smiraglia 2013). At the time I was chagrined to discover how little correlation there was across the formal indexing of a small set of papers from our journal, and especially to see how little correspondence there was between the actual keywords appearing in the published texts and any of the indexing supplied by either Thomson Reuters' Web of Science (WoS) or EBSCOhost's Library and Information Science and Technology Abstracts with Full Text (LISTA). The idea of a keyword arose in the early days of automated indexing, when it was discovered that using terms that actually occurred in full texts (or, in the earliest days, in titles and abstracts) as search "keys," usually in Boolean combinations, provided fairly precise recall in small, contextually confined text corpora. A recent Wikipedia entry (Keywords 2015) imbues keywords with properties of structural reasoning, but notes that they are "key" among the most frequently occurring terms in a text corpus. The jury is still out on whether keyword retrieval is better than indexing with subject headings, but in general, keyword searches in large, unstructured text corpora (which is what we have today) are imprecise and result in large recall sets with many irrelevant hits (see the recent analysis by Gross, Taylor, and Joudrey (2014)). Thus it seems inadvisable to me, as editor, especially of a journal on knowledge organization, to facilitate imprecise indexing of our journal's content.
  10. Kempf, A.O.; Neubert, J.; Faden, M.: The missing link : a vocabulary mapping effort in economics (2015) 0.02
    
    Abstract
    In economics, there exists an internationally established classification system. Research literature is usually classified according to the JEL classification codes, a classification system originated by the Journal of Economic Literature and published by the American Economic Association (AEA). Complementarily to keywords, which are usually assigned freely, economists widely use the JEL codes when classifying their publications. In cooperation with KU Leuven, ZBW - Leibniz Information Centre for Economics has published an unofficial multilingual version of JEL in SKOS format. In addition to this, there exists the STW Thesaurus for Economics, a bilingual domain-specific controlled vocabulary maintained by the German National Library of Economics (ZBW). Developed in the mid-1990s and constantly updated since then to reflect current terminology usage in the latest international research literature in economics, it covers all sub-fields of economics as well as business economics and business practice, containing subject headings which are clearly delimited from each other. It was published on the web as Linked Open Data in 2009.
  11. Balíková, M.: Subject authority control supported by classification : the case of National Library of the Czech Republic (2015) 0.02
    
    Abstract
    From the very beginnings of library automation, subject authority control has been considered an important bibliographic tool in the Czech National Library (CNL). Effective subject access cannot exist without standardised access points. Subject authorities are considered an indispensable reference tool in supporting the selection of subject access points and normalizing content indexing. Most importantly, they are heavily relied upon when it comes to customisation of links between bibliographic records and subject access points in order to create a user-friendly subject browsing and searching environment. Because the Universal Decimal Classification (UDC) is widely used in Czech libraries, it has become a readily available language-independent subject framework which can be complemented by a more user-friendly subject heading system. In this context, subject authority control offers a means of enhancing subject headings' access points with the terminology and semantic links available in UDC. Furthermore, classification is used to enrich relationships between the authority records themselves. The author discusses in more detail the different aspects and advantages of subject authorities in which a classification and a subject heading system complement one another, and the way this is implemented in the CNL.
  12. Francu, V.; Dediu, L.-I.: TinREAD - an integrative solution for subject authority control (2015) 0.02
    
    Abstract
    The paper introduces TinREAD (The Information Navigator for Readers), an integrated library system produced by IME Romania. The main feature of interest is the way TinREAD can handle a classification-based thesaurus in which verbal index terms are mapped to classification notations. It supports subject authority control by interlinking the authority files (subject headings and the UDC system). Authority files are used for indexing consistency. Although it is said that intellectual indexing is, unlike automated indexing, both subjective and inconsistent, TinREAD uses intellectual indexing as input (the UDC notations assigned to documents) for the automated indexing resulting from the implementation of a thesaurus structure based on UDC. Each UDC notation is represented by a UNIMARC subject heading record as authority data. One classification notation can be used to search simultaneously in more than one corresponding thesaurus. This way, natural language terms are used in indexing and, at the same time, the link with the corresponding classification notation is kept. Additionally, the system can also manage multilingual data for the authority files. This, together with other characteristics of TinREAD, is discussed at length and illustrated in the paper. Problems encountered, and possible solutions for tackling them, are shown.
  13. Kaplan, A.G.; Riedling, A.M.: Catalog it! : a guide to cataloging school library materials (2015) 0.02
    
    Abstract
    This invaluable cataloging resource gives pre-service and practicing school library media specialists the tools they need to be intelligent consumers of commercial cataloging and competent organizers of new materials in their collections. The second edition contains expanded information on Library of Congress Subject Headings and electronic cataloging and cataloging systems, as well as Dewey Decimal Classification (DDC) and Machine Readable Cataloging (MARC). Whether you're a practicing cataloger looking for a short text to update you on the application of RDA to cataloging records or a school librarian who needs a quick resource to answer cataloging questions, this guide is for you.
    - Thoroughly updates a best-selling, essential guide to cataloging
    - Addresses the new standards specifically as they apply to school libraries
    - Helps school librarians understand and implement the new cataloging standards in their collections
    - Distills the latest information and presents it in a format that is clear and accessible
    - Fills the need for up-to-the-minute cataloging guidance for the busy librarian who wants information in a hurry
  14. Kwasnik, B.H.: The Web and the pyramid : Hope Olson's vision of connectedness in a world of hierarchies (2016) 0.02
    
    Abstract
    Hope Olson's mission is to analyze our traditional knowledge-representation systems from the point of view of those whose voices are not well reflected. Her focus is not only on the content of these schemes but also, and perhaps especially, on their structures. There is no structure more established than the hierarchy, and yet the hierarchy makes assumptions and imposes rules that have shaped our world view. In her 2007 Library Trends article, "How We Construct Subjects: A Feminist Analysis," she takes apart the notions behind hierarchies and brings feminist thinking to bear, offering a penetrating critique followed by a careful evaluation of implications. By way of examples she explores several existing schemes, the Dewey Decimal Classification, thesauri, and the Library of Congress Subject Headings, to demonstrate how there do exist ameliorating (non-hierarchical) techniques, but how they do not adequately solve the problem. Having laid out the limitations of our existing tools, both in content and in structure, she suggests rewriting and restructuring our schemes so that the all-important connections are visible: a web instead of a hierarchy. The article, written almost a decade ago, continues to be prophetic of what modern approaches and ways of thinking can achieve. As such, an analysis of the article serves here as a way of explicating Hope's rich and penetrating intellectual contributions and her critical yet hopeful vision.
  15. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.02
    
    Abstract
    Descriptive metadata play a key role in finding relevant search results in large amounts of unstructured data. However, current scientific audiovisual media are provided with little metadata, which makes them hard to find, let alone individual sequences. In this paper, the TIB / AV-Portal is presented as a use case where methods concerning the automatic generation of metadata, a semantic search, and cross-lingual retrieval (German/English) have already been applied. These methods result in a better discoverability of the scientific audiovisual media hosted in the portal. Text, speech, and image content of the video are automatically indexed by specialised GND (Gemeinsame Normdatei) subject headings. A semantic search is established based on properties of the GND ontology. The cross-lingual retrieval uses English 'translations' that were derived by an ontology mapping (DBpedia, among others). Further ways of increasing the discoverability and reuse of the metadata are publishing them as Linked Open Data and interlinking them with other data sets.
  16. Szostak, R.: Facet analysis using grammar (2017) 0.02
    
    Abstract
    Basic grammar can achieve most, if not all, of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for classificationist, classifier, and user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable - and programmable - set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to a subject classification. The user can likewise move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings currently assigned. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
  17. Hider, P.: The search value added by professional indexing to a bibliographic database (2017)

    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This paper reports on an analysis of the loss levels that would result if a bibliographic database, namely the Australian Education Index (AEI), were missing the subject descriptors and identifiers assigned by its professional indexers, employing the methodology developed by Gross and Taylor (2005), and later by Gross et al. (2015). The results indicate that AEI users would lose a similar proportion of hits per query to that experienced by library catalog users: on average, 27% of the resources found by a sample of keyword queries on the AEI database would not have been found without the subject indexing, based on the Australian Thesaurus of Education Descriptors (ATED). The paper also discusses the methodological limitations of these studies, pointing out that real-life users might still find some of the resources missed by a particular query through follow-up searches, while additional resources might also be found through iterative searching on the subject vocabulary. The paper goes on to describe a new research design, based on a before-and-after experiment, which addresses some of these limitations. It is argued that this alternative design will provide a more realistic picture of the value that professionally assigned subject indexing and controlled subject vocabularies can add to literature searching of a more scholarly and thorough kind.
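    The loss measure underlying these studies is simple to state: run each query against the database with and without its controlled subject fields, and compute the share of hits that disappear. A minimal sketch of that calculation (the function name and figures are illustrative, not taken from the studies):

    ```python
    def hit_loss(hits_with_indexing, hits_without_indexing):
        """Proportion of a query's hits lost without subject indexing."""
        lost = hits_with_indexing - hits_without_indexing
        return lost / hits_with_indexing

    # e.g. 30 hits with subject indexing, 22 without: about 27% of hits lost
    print(round(hit_loss(30, 22) * 100))
    ```

    Averaging this proportion over a sample of queries yields the "about a quarter of hits" figure the studies report.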
  18. Hider, P.: The search value added by professional indexing to a bibliographic database (2018)

    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This article reports on an investigation of the search value that subject descriptors and identifiers assigned by professional indexers add to a bibliographic database, namely the Australian Education Index (AEI). First, a similar methodology to that developed by Gross et al. (2015) was applied, with keyword searches representing a range of educational topics run on the AEI database with and without its subject indexing. The results indicated that AEI users would also lose, on average, about a quarter of hits per query. Second, an alternative research design was applied in which an experienced literature searcher was asked to find resources on a set of educational topics on an AEI database stripped of its subject indexing and then asked to search for additional resources on the same topics after the subject indexing had been reinserted. In this study, the proportion of additional resources that would have been lost had it not been for the subject indexing was again found to be about a quarter of the total resources found for each topic, on average.
  19. Fujita, M.; Lopes, L.; Moreira, W.; Piovezan dos Santos, L.B.; Andrade e Cruz, M.C.; Rodrigues de Barros Ribas, R.: Construction and evaluation of hierarchical structures of indexing languages for online catalogs of libraries : an experience of the São Paulo State University (UNESP) (2018)

    Abstract
    The construction and updating of indexing languages depend on the organization of their hierarchical structures in order to determine the classification of related terms and, above all, to allow a constant updating of vocabulary, a condition for knowledge evolution. The elaboration of an indexing language for online catalogs of libraries' networks is important considering the diversity and specificity of knowledge areas. From this perspective, the present paper reports on the work of a team of catalogers and researchers engaged in the construction of a hierarchical structure of an indexing language for an online catalog of a university library's network. The work on hierarchical structures began by defining the categories and subcategories that form the indexing language macrostructure by using the parameters of the Library of Congress Subject Headings, the National Library Terminology and the Vocabulary of the University of São Paulo Library's system. Throughout the stages of the elaboration process of the macrostructure, difficulties and improvements were observed and discussed. The results enabled the assessment of the hierarchical structures of the languages used in the organization of the superordinate and subordinate terms, which has contributed to the systematization of operational procedures contained in an indexing language manual for online catalogs of libraries.
  20. Losee, R.M.: Improving collection browsing : small world networking and Gray code ordering (2017)

    Abstract
    Documents in digital and paper libraries may be arranged, based on their topics, in order to facilitate browsing. It may seem intuitively obvious that ordering documents by their subject should improve browsing performance; the results presented in this article suggest that ordering library materials by their Gray code values, together with links consistent with the small world model of document relationships, can improve browsing performance. Below, library circulation data, including ordering with Library of Congress Classification numbers and Library of Congress Subject Headings, are used to provide information useful in generating user-centered document arrangements, as well as user-independent arrangements. Documents may be linearly arranged so they can be placed in a line by topic, such as on a library shelf, or in a list on a computer display. Crossover links, jumps between a document and another document to which it is not adjacent, can be used in library databases to allow additional paths that one might take when browsing. The improvement obtained with different combinations of document orderings and crossovers is examined, and applications are suggested.
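    The Gray code ordering described in this abstract can be sketched briefly: each document is represented by a binary subject-feature vector, and documents are shelved in order of the binary-reflected Gray code rank of that vector, so that neighbours tend to differ in few features. A minimal illustration, with hypothetical titles and feature vectors (not Losee's implementation):

    ```python
    def gray_to_int(bits):
        """Rank of a binary-reflected Gray code bit sequence (MSB first)."""
        value = 0
        for bit in bits:
            # each decoded binary bit is the Gray bit XORed with the previous binary bit
            value = (value << 1) | (bit ^ (value & 1))
        return value

    def gray_order(docs):
        """Sort (title, feature-vector) pairs by Gray code rank of the vector."""
        return sorted(docs, key=lambda d: gray_to_int(d[1]))

    docs = [
        ("Doc A", (0, 1, 1)),
        ("Doc B", (1, 0, 0)),
        ("Doc C", (0, 0, 1)),
        ("Doc D", (1, 1, 0)),
    ]
    for title, vec in gray_order(docs):
        print(title, vec)
    ```

    Sorting by Gray code rank tends to place documents with similar feature sets near one another; crossover links can then connect similar documents that end up non-adjacent in the linear order.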
