Search (1401 results, page 4 of 71)

  • Filter: language_ss:"e"
  1. Moreira Orengo, V.; Huyck, C.: Relevance feedback and cross-language information retrieval (2006) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 1970) [ClassicSimilarity], result of:
          0.25525856 = score(doc=1970,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 1970, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=1970)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents a study of relevance feedback (RF) in a cross-language information retrieval environment. We have performed an experiment in which Portuguese speakers are asked to judge the relevance of English documents, documents hand-translated into Portuguese, and documents automatically translated into Portuguese. The goals of the experiment were to answer two questions: (i) how well can native Portuguese searchers recognise relevant documents written in English, compared to documents that are hand-translated or automatically translated into Portuguese; and (ii) what is the impact of misjudged documents on the performance improvement that can be achieved by relevance feedback? Surprisingly, the results show that machine translation is as effective as hand translation in aiding users to assess relevance in the experiment. In addition, the impact of misjudged documents on the performance of RF is overall only moderate, and varies greatly across query topics.
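The ClassicSimilarity score explanations shown for these entries can be reproduced numerically. The following sketch restates Lucene's classic TF-IDF scoring formula in Python, with the constants copied from the explanation for doc 1970; it is a generic restatement, not code from this system.

```python
# Reproduce the ClassicSimilarity (TF-IDF) arithmetic from the explanation above.
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=52, maxDocs=44421) = 7.731176
    tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm                  # queryWeight = 0.49805635
    field_weight = tf * idf * field_norm             # fieldWeight = 0.5125094
    return coord * query_weight * field_weight

score = classic_similarity_score(
    freq=2.0, doc_freq=52, max_docs=44421,
    query_norm=0.06442181, field_norm=0.046875, coord=0.25,
)
print(score)  # close to the 0.06381464 reported above
```

The coord factor (1/4) reflects that only one of four query terms matched the document.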
  2. Leroy, G.; Miller, T.; Rosemblat, G.; Browne, A.: ¬A balanced approach to health information evaluation : a vocabulary-based naïve Bayes classifier and readability formulas (2008) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 2998) [ClassicSimilarity], result of:
          0.25525856 = score(doc=2998,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 2998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=2998)
      0.25 = coord(1/4)
    
    Abstract
    Since millions seek health information online, it is vital for this information to be comprehensible. Most studies use readability formulas, which ignore vocabulary, and conclude that online health information is too difficult. We developed a vocabulary-based, naïve Bayes classifier to distinguish between three difficulty levels in text. It proved 98% accurate in a 250-document evaluation. We compared our classifier with readability formulas for 90 new documents with different origins and asked representative human evaluators, an expert and a consumer, to judge each document. Average readability grade levels for educational and commercial pages were 10th grade or higher, too difficult according to the current literature. In contrast, the classifier showed that 70-90% of these pages were written at an intermediate, appropriate level, indicating that vocabulary usage is frequently appropriate in text considered too difficult by readability formula evaluations. The expert considered the pages more difficult for a consumer than the consumer did.
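A vocabulary-based naïve Bayes text classifier of the kind the abstract describes can be sketched as follows. The toy training data, level names, and Laplace smoothing are illustrative assumptions, not the authors' actual vocabulary resource or corpus.

```python
# Hypothetical sketch: naive Bayes over word counts for three difficulty levels.
import math
from collections import Counter

LEVELS = ["easy", "intermediate", "difficult"]

def train(docs):
    """docs: list of (text, level) pairs."""
    priors = Counter(level for _, level in docs)
    word_counts = {lvl: Counter() for lvl in LEVELS}
    vocab = set()
    for text, level in docs:
        words = text.lower().split()
        word_counts[level].update(words)
        vocab.update(words)
    return priors, word_counts, vocab, len(docs)

def classify(text, priors, word_counts, vocab, n_docs):
    best_level, best_logp = None, -math.inf
    for lvl in LEVELS:
        if priors[lvl] == 0:
            continue
        logp = math.log(priors[lvl] / n_docs)
        total = sum(word_counts[lvl].values())
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            logp += math.log((word_counts[lvl][w] + 1) / (total + len(vocab)))
        if logp > best_logp:
            best_level, best_logp = lvl, logp
    return best_level
```

Unlike a readability formula, a classifier like this is sensitive to which words appear, not just to sentence and word length.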
  3. Cosijn, E.: Relevance judgments and measurements (2009) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 842) [ClassicSimilarity], result of:
          0.25525856 = score(doc=842,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=842)
      0.25 = coord(1/4)
    
    Abstract
    Users intuitively know which documents are relevant when they see them. Formal relevance assessment, however, is a complex issue. In this entry relevance assessments are described both from a human perspective and a systems perspective. Humans judge relevance in terms of the relation between the documents retrieved and the way in which these documents are understood and used. This is a subjective and personal judgment and is called user relevance. Systems compute a function of the query and the document features that the system builders believe will rank documents by the likelihood that a user will find them relevant. This is an objective measurement of relevance in terms of relations between the query and the documents retrieved; this is called system relevance (or sometimes similarity).
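The "function of the query and the document features" that systems compute can be illustrated with a simple similarity measure. Cosine similarity over raw term-frequency vectors is a common textbook choice; this is a sketch of system relevance in general, not the specific function used by any system discussed in the entry.

```python
# Cosine similarity between query and document term-frequency vectors.
import math
from collections import Counter

def cosine_similarity(query, document):
    q = Counter(query.lower().split())
    d = Counter(document.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0
```

A system ranks documents by such a score; user relevance, by contrast, is whatever the searcher actually judges useful, and the two need not agree.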
  4. Luyt, B.: ¬The inclusivity of Wikipedia and the drawing of expert boundaries : an examination of talk pages and reference lists (2012) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 1391) [ClassicSimilarity], result of:
          0.25525856 = score(doc=1391,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 1391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=1391)
      0.25 = coord(1/4)
    
    Abstract
    Wikipedia is frequently viewed as an inclusive medium. But inclusivity within this online encyclopedia is not a simple matter of just allowing anyone to contribute. In its quest for legitimacy as an encyclopedia, Wikipedia relies on outsiders to judge claims championed by rival editors. In choosing these experts, Wikipedians define the boundaries of acceptable comment on any given subject. Inclusivity then becomes a matter of how the boundaries of expertise are drawn. In this article I examine the nature of these boundaries and the implications they have for inclusivity and credibility, as revealed through the talk pages produced and sources used by a particular subset of Wikipedia's creators: those involved in writing articles on the topic of Philippine history.
  5. Wang, X.; Hong, Z.; Xu, Y.(C.); Zhang, C.; Ling, H.: Relevance judgments of mobile commercial information (2014) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 2301) [ClassicSimilarity], result of:
          0.25525856 = score(doc=2301,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 2301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=2301)
      0.25 = coord(1/4)
    
    Abstract
    In the age of mobile commerce, users receive floods of commercial messages. How do users judge the relevance of such information? Is their relevance judgment affected by contextual factors, such as location and time? How do message content and contextual factors affect users' privacy concerns? With a focus on mobile ads, we propose a research model based on theories of relevance judgment and mobile marketing research. We suggest topicality, reliability, and economic value as key content factors and location and time as key contextual factors. We found mobile relevance judgment is affected mainly by content factors, whereas privacy concerns are affected by both content and contextual factors. Moreover, topicality and economic value have a synergetic effect that makes a message more relevant. Higher topicality and location precision exacerbate privacy concerns, whereas message reliability alleviates privacy concerns caused by location precision. These findings reveal an interesting intricacy in user relevance judgment and privacy concerns and provide nuanced guidance for the design and delivery of mobile commercial information.
  6. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.06
    0.06381464 = product of:
      0.25525856 = sum of:
        0.25525856 = weight(_text_:judge in 1863) [ClassicSimilarity], result of:
          0.25525856 = score(doc=1863,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.5125094 = fieldWeight in 1863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.046875 = fieldNorm(doc=1863)
      0.25 = coord(1/4)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summarization and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work is a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
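The abstract proposes a metric for the readability and clarity of chatbot output. As a stand-in illustration only, this sketch computes the standard Flesch reading-ease score; it is not the authors' metric, and the syllable counter is a rough vowel-group heuristic.

```python
# Standard Flesch reading-ease score with a heuristic syllable counter.
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    # Higher scores indicate easier text.
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
```

Scores like this can be computed for both human and machine prose, which is how a 14% gap between Turing's original text and model output could be quantified.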
  7. Hill, L.: New Protocols for Gazetteer and Thesaurus Services (2002) 0.06
    0.06300362 = product of:
      0.12600724 = sum of:
        0.014004949 = weight(_text_:und in 2206) [ClassicSimilarity], result of:
          0.014004949 = score(doc=2206,freq=2.0), product of:
            0.14288108 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06442181 = queryNorm
            0.098018214 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.03125 = fieldNorm(doc=2206)
        0.1120023 = weight(_text_:handling in 2206) [ClassicSimilarity], result of:
          0.1120023 = score(doc=2206,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.27719125 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.03125 = fieldNorm(doc=2206)
      0.5 = coord(2/4)
    
    Abstract
    The Alexandria Digital Library Project announces the online publication of two protocols to support querying and response interactions using distributed services: one for gazetteers and one for thesauri. These protocols have been developed for our own purposes and also to support the general interoperability of gazetteers and thesauri on the web. See <http://www.alexandria.ucsb.edu/~gjanee/gazetteer/> and <http://www.alexandria.ucsb.edu/~gjanee/thesaurus/>. For the gazetteer protocol, we have provided a page of test forms that can be used to experiment with the operational functions of the protocol in accessing two gazetteers: the ADL Gazetteer and the ESRI Gazetteer (ESRI has participated in the development of the gazetteer protocol). We are in the process of developing a thesaurus server and a simple client to demonstrate the use of the thesaurus protocol. We are soliciting comments on both protocols. Please remember that we are seeking protocols that are essentially "simple" and easy to implement and that support basic operations - they should not duplicate all of the functions of specialized gazetteer and thesaurus interfaces. We continue to discuss ways of handling various issues and to further develop the protocols. For the thesaurus protocol, outstanding issues include the treatment of multilingual thesauri and the degree to which the language attribute should be supported; whether the Scope Note element should be changed to a repeatable Note element; the best way to handle the hierarchical report for multi-hierarchies where portions of the hierarchy are repeated; and whether support for searching by term identifiers is redundant and unnecessary given that the terms themselves are unique within a thesaurus. For the gazetteer protocol, we continue to work on validation of query and report XML documents and on implementing the part of the protocol designed to support the submission of new entries to a gazetteer. 
We would like to encourage open discussion of these protocols through the NKOS discussion list (see the NKOS webpage at <http://nkos.slis.kent.edu/>) and the CGGR-L discussion list that focuses on gazetteer development (see ADL Gazetteer Development page at <http://www.alexandria.ucsb.edu/gazetteer>).
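A query/response exchange under a protocol like the thesaurus protocol described above can be sketched as follows. The element names (<thesaurus-query>, <get-broader>, <term>) are illustrative assumptions; the real protocol elements are defined in the documents linked above.

```python
# Hypothetical client side of an XML query/report exchange with a thesaurus server.
import xml.etree.ElementTree as ET

def build_query(term, relation="broader"):
    """Serialize a query asking for terms related to `term`."""
    root = ET.Element("thesaurus-query")
    req = ET.SubElement(root, "get-" + relation)
    ET.SubElement(req, "term").text = term
    return ET.tostring(root, encoding="unicode")

def parse_report(xml_text):
    """Extract the terms from a server's report document."""
    return [t.text for t in ET.fromstring(xml_text).iter("term")]
```

Validating such query and report documents against a schema is exactly the kind of work the gazetteer protocol paragraph mentions as ongoing.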
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Information science in transition (2009) 0.06
    0.06187723 = product of:
      0.12375446 = sum of:
        0.024757486 = weight(_text_:und in 1634) [ClassicSimilarity], result of:
          0.024757486 = score(doc=1634,freq=16.0), product of:
            0.14288108 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06442181 = queryNorm
            0.17327337 = fieldWeight in 1634, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=1634)
        0.098996975 = weight(_text_:handling in 1634) [ClassicSimilarity], result of:
          0.098996975 = score(doc=1634,freq=4.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.24500476 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.01953125 = fieldNorm(doc=1634)
      0.5 = coord(2/4)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer based information systems for more effective retrieval? Will information science become part of computer science and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. 
With an introduction from Jack Meadows, the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and, Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Content
    Inhalt: Fifty years of UK research in information science - Jack Meadows / Smoother pebbles and the shoulders of giants: the developing foundations of information science - David Bawden / The last 50 years of knowledge organization: a journey through my personal archives - Stella G. Dextre Clarke / On the history of evaluation in IR - Stephen Robertson / The information user: past, present and future - Tom Wilson / The sociological turn in information science - Blaise Cronin / From chemical documentation to chemoinformatics: 50 years of chemical information science - Peter Willett / Health informatics: current issues and challenges - Peter A. Bath / Social informatics and sociotechnical research - a view from the UK - Elisabeth Davenport / The evolution of visual information retrieval - Peter Enser / Information policies: yesterday, today, tomorrow - Elizabeth Orna / The disparity in professional qualifications and progress in information handling: a European perspective - Barry Mahon / Electronic scholarly publishing and Open Access - Charles Oppenheim / Social software: fun and games, or business tools ? - Wendy A. Warr / Bibliometrics to webometrics - Mike Thelwall / How I learned to love the Brits - Eugene Garfield
    Footnote
    Review in: Mitt VÖB 62(2009) H.3, S.95-99 (O. Oberhauser): "This handsome volume collects 16 contributions and two editorials that already appeared in 2008 as a special issue of the Journal of Information Science - at that time marking the 50th anniversary of the founding of the Institute of Information Scientists (IIS), which has not existed as an independent body since 2002. Generally speaking, the essays reflect the state of information science then, now, and over the course of those 50 years, with an emphasis on developments in the United Kingdom. The contributors are established and renowned representatives of British information science and practice - the sole exception being Eugene Garfield (USA), who closes the volume with personal reminiscences. With this reissue of the collection as a hardcover publication, the editor and publisher wanted above all to reach a wider readership, but also to give libraries that hold the journal the opportunity to shelve the work additionally as a monograph. . . . The question remains whether a renewed publication as a book is justified. In terms of content the volume is, without any doubt, compelling. Anyone with an interest in information science will profit from the texts collected here. And of course it is convenient to hold a well-made book in one's hands, one that in many libraries - unlike the journal volume - can also be borrowed. Everything else is really just a question of budget." Further reviews in: IWP 61(2010) H.2, S.148 (L. Weisel); JASIST 61(2010) no.7, S.1505 (M. Buckland); KO 38(2011) no.2, S.171-173 (P. Matthews): "Armed then with tools and techniques often applied to the structural analysis of other scientific fields, this volume frequently sees researchers turning this lens on themselves and ranges in tone from the playfully reflexive to the (parentally?) overprotective. What is in fact revealed is a rather disparate collection of research areas, all making a valuable contribution to our understanding of the nature of information. As is perhaps the tendency with overzealous lumpers (see http://en.wikipedia.org/wiki/Lumpers_and_splitters), some attempts to bring these areas together seem a little forced. The splitters help draw attention to quite distinct specialisms, IS's debts to other fields, and the ambition of some emerging subfields to take up intellectual mantles established elsewhere. In the end, the multidisciplinary nature of information science shines through. With regard to future directions, the subsumption of IS into computer science is regarded as in many ways inevitable, although there is consensus that the distinct infocentric philosophy and outlook which has evolved within IS is something to be retained." A further review in: KO 39(2012) no.6, S.463-465 (P. Matthews)
    RSWK
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Subject
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
  9. Drabenstott, K.M.: Classification to the rescue : handling the problems of too many and too few retrievals (1996) 0.06
    0.059398185 = product of:
      0.23759274 = sum of:
        0.23759274 = weight(_text_:handling in 5232) [ClassicSimilarity], result of:
          0.23759274 = score(doc=5232,freq=4.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.58801144 = fieldWeight in 5232, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=5232)
      0.25 = coord(1/4)
    
    Abstract
    The first studies of online catalog use demonstrated that the problems of too many and too few retrievals plagued the earliest online catalog users. Despite 15 years of system development, implementation, and evaluation, these problems still adversely affect the subject searches of today's online catalog users. In fact, the large-retrievals problem has grown more acute due to the growth of online catalog databases. This paper explores the use of library classifications for consolidating and summarizing high-posted subject searches and for handling subject searches that result in no or too few retrievals. Findings are presented in the form of generalizations about retrievals and library classifications, needed improvements to classification terminology, and suggestions for improved functionality to facilitate the display of retrieved titles in online catalogs.
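The consolidation idea described above can be sketched as grouping a large retrieval set by a classification-number prefix, so users browse class summaries instead of a long result list. The sample records and DDC-style numbers are illustrative assumptions.

```python
# Summarize a high-posted retrieval set by classification-number prefix.
from collections import defaultdict

def summarize_by_class(records, depth=3):
    """Group (title, class_number) records by a class-number prefix."""
    groups = defaultdict(list)
    for title, class_no in records:
        groups[class_no[:depth]].append(title)
    return dict(groups)

hits = [
    ("Introduction to cataloguing", "025.3"),
    ("Subject access in online catalogs", "025.4"),
    ("Library classification systems", "025.4"),
    ("History of libraries", "027.0"),
]
summary = summarize_by_class(hits)
print({cls: len(titles) for cls, titles in summary.items()})
```

Pairing each prefix with its caption from the classification schedule would give the browsable summary display the paper envisages.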
  10. Sitas, A.: ¬The classification of byzantine literature in the Library of Congress classification (2001) 0.06
    0.059398185 = product of:
      0.23759274 = sum of:
        0.23759274 = weight(_text_:handling in 957) [ClassicSimilarity], result of:
          0.23759274 = score(doc=957,freq=4.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.58801144 = fieldWeight in 957, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=957)
      0.25 = coord(1/4)
    
    Abstract
    Topics concerning the classification of Byzantine literature and, more generally, of Byzantine texts are discussed, analyzed and clarified. The time boundaries of the period are described, as well as the kinds of published material. Schedule PA (Supplement) of the Library of Congress Classification is discussed and evaluated with respect to its handling of Byzantine literature. The main Schedule PA is also considered, as well as other relevant classes. Based on the results regarding the handling of Classical literature texts, it is concluded that a) Early Christian literature and the Fathers of the Church must be excluded from Class PA and b) in order to achieve a uniform, continuous, consistent and reliable classification of Byzantine texts, they must be treated according to the method proposed for Classical literature by the Library of Congress in Schedule PA.
  11. Williamson, N.: ¬An interdisciplinary world and discipline based classification (1998) 0.06
    0.059398185 = product of:
      0.23759274 = sum of:
        0.23759274 = weight(_text_:handling in 1085) [ClassicSimilarity], result of:
          0.23759274 = score(doc=1085,freq=4.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.58801144 = fieldWeight in 1085, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=1085)
      0.25 = coord(1/4)
    
    Abstract
    The major classification systems continue to remain discipline-based despite significant changes in the structure of knowledge, particularly in the latter part of the 20th century. While it would be desirable to have these systems replaced by systems in keeping with the changes, the probability of this happening in the near future is very slim indeed. Problems of handling interdisciplinarity among subjects in conjunction with existing systems are addressed. The nature of interdisciplinarity is defined and general problems are discussed. Principles and methods of handling are examined and new approaches to the problems are proposed. Experiments are currently being carried out to determine how some of the possibilities might be implemented in the existing systems. Experimental examples are under development. Efforts have been made to propose practical solutions and to suggest directions for further theoretical and experimental research.
  12. Information ethics : privacy, property, and power (2005) 0.06
    0.059368238 = product of:
      0.118736476 = sum of:
        0.10635773 = weight(_text_:judge in 3392) [ClassicSimilarity], result of:
          0.10635773 = score(doc=3392,freq=2.0), product of:
            0.49805635 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.06442181 = queryNorm
            0.21354558 = fieldWeight in 3392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.01953125 = fieldNorm(doc=3392)
        0.012378743 = weight(_text_:und in 3392) [ClassicSimilarity], result of:
          0.012378743 = score(doc=3392,freq=4.0), product of:
            0.14288108 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06442181 = queryNorm
            0.086636685 = fieldWeight in 3392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=3392)
      0.5 = coord(2/4)
    
    BK
    06.00 / Information und Dokumentation: Allgemeines
    Classification
    06.00 / Information und Dokumentation: Allgemeines
    Footnote
    Review in: JASIST 58(2007) no.2, S.302 (L.A. Ennis): "This is an important and timely anthology of articles "on the normative issues surrounding information control" (p. 11). Using an interdisciplinary approach, Moore's work takes a broad look at the relatively new field of information ethics. Covering a variety of disciplines including applied ethics, intellectual property, privacy, free speech, and more, the book provides information professionals of all kinds with a valuable and thought-provoking resource. Information Ethics is divided into five parts and twenty chapters or articles. At the end of each of the five parts, the editor has included a few "discussion cases," which allow users to apply what they have just read to potential real-life examples. Part I, "An Ethical Framework for Analysis," provides readers with an introduction to reasoning and ethics. This complex and philosophical section of the book contains five articles and four discussion cases. All five of the articles are thought-provoking and challenging writings on morality. For instance, in the first article, "Introduction to Moral Reasoning," Tom Regan examines how not to answer a moral question. For example, he thinks using what the majority believes as a means of determining what is and is not moral is flawed. "The Metaphysics of Morals" by Immanuel Kant looks at the reasons behind actions. According to Kant, to be moral one has to do the right thing for the right reasons. By including materials that force the reader to think more broadly and deeply about what is right and wrong, Moore has provided an important foundation and backdrop for the rest of the book. Part II, "Intellectual Property: Moral and Legal Concerns," contains five articles and three discussion cases for tackling issues like ownership, patents, copyright, and biopiracy. This section takes a probing look at intellectual and intangible property from a variety of viewpoints.
For instance, in "Intellectual Property is Still Property," Judge Frank Easterbrook argues that intellectual property is no different than physical property and should not be treated any differently by law. Tom Palmer's article, "Are Patents and Copyrights Morally Justified," however, uses historical examples to show how intellectual and physical properties differ.
  13. Ritzler, C.: Comparative study of PC-supported thesaurus software (1990) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 2218) [ClassicSimilarity], result of:
          0.2240046 = score(doc=2218,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 2218, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=2218)
      0.25 = coord(1/4)
    
    Abstract
    This article presents the results of a comparative study of three PC-supported software packages (INDEX, PROTERM and TMS) for the development, construction and management of thesauri and other word material, with special regard to hardware and software requirements, handling and user interface, and functionality and reliability. Advantages and disadvantages are discussed. The results show that all three software products comply with the minimum standards for thesaurus software. Once additional features are taken into account, distinct differences become visible.
  14. Cawkell, A.E.: Selected aspects of image processing and management : review and future prospects (1992) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 2658) [ClassicSimilarity], result of:
          0.2240046 = score(doc=2658,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 2658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=2658)
      0.25 = coord(1/4)
    
    Abstract
    State-of-the-art review of techniques applied to various aspects of image processing in information systems, including: indexing of images in electronic form (manual and computerised indexing, and automatic indexing by content); image handling using microcomputers; and descriptions of 8 British Library funded research projects. The article is based on a BLRD report.
  15. Claassen, W.T.: Transparent hypermedia? (1992) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 4263) [ClassicSimilarity], result of:
          0.2240046 = score(doc=4263,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 4263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=4263)
      0.25 = coord(1/4)
    
    Abstract
    Considers why the use of hypermedia has not been more widely accepted and applied in practice, given that it is such a powerful information handling technique and has been commercially available for 5 years. Argues that hypermedia is not sufficiently open or transparent to users, enabling them to find relevant information relatively easily and at a high level of sophistication. Suggests that a higher degree of transparency can be obtained by taking into account a variety of issues which can best be accommodated under the designation 'information ecology'.
  16. Buckland, M.K.: Information retrieval of more than text (1991) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 4813) [ClassicSimilarity], result of:
          0.2240046 = score(doc=4813,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 4813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=4813)
      0.25 = coord(1/4)
    
    Abstract
    In the past, information retrieval has been primarily concerned with text and text-like data. Considers image handling as a form of image retrieval, and reviews the pioneering work of Paul Otlet and Suzanne Briet. Concludes that the terminology of multimedia needs attention in order to distinguish phenomena, facts, representations, forms of expression, and physical medium.
  17. Horne, E.E.: ¬An investigation into selfquestioning behaviour during problem-solving (1990) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 4863) [ClassicSimilarity], result of:
          0.2240046 = score(doc=4863,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 4863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=4863)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of the research was to investigate self-questioning behaviour within the context of a problem-solving situation in order to better understand users' handling of their information needs. An experiment was designed to yield data to test the hypothesis that problem-solving activity is reflected as successful and unsuccessful questioning behaviour. 2 distinct questioning behaviours emerged to support the hypothesis.
  18. Raeder, A.: Library Master for databases and bibliographies (1991) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 4928) [ClassicSimilarity], result of:
          0.2240046 = score(doc=4928,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 4928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=4928)
      0.25 = coord(1/4)
    
    Abstract
    Describes the facilities provided by Library Master version 1.2 software for handling personal databases of retrieved bibliographic citations and for generating indexed and processed bibliographies. Library Master is designed to allow records to be downloaded from online databases or online catalogues, and the resulting collection may be converted into a separate, specialised online or CD-ROM database. A local area network (LAN) version of Library Master is under development.
  19. Conturbia, S.D.: Who catalogs foreign-language materials? : a survey of ARL libraries (1992) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 5058) [ClassicSimilarity], result of:
          0.2240046 = score(doc=5058,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 5058, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5058)
      0.25 = coord(1/4)
    
    Abstract
    Presents results of a survey of cataloguers of foreign language materials in North American libraries. In Spring 1991, a questionnaire was sent to the heads of the cataloguing departments of the members of the Association of Research Libraries (ARL) in order to examine their criteria for selecting and hiring cataloguers of foreign language materials, and to assess the present status of cataloguing backlogs. 80 libraries participated in the survey and provided suggestions on handling the problem of cataloguing backlogs of foreign language materials.
  20. Gilbert, S.K.: SGML theory and practice (1989) 0.06
    0.05600115 = product of:
      0.2240046 = sum of:
        0.2240046 = weight(_text_:handling in 5943) [ClassicSimilarity], result of:
          0.2240046 = score(doc=5943,freq=2.0), product of:
            0.40406144 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.06442181 = queryNorm
            0.5543825 = fieldWeight in 5943, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5943)
      0.25 = coord(1/4)
    
    Abstract
    Provides information for people who want (or need) to know what SGML is and want to make use of it. Gives a fairly detailed description of what SGML is and why it exists, and provides a list of SGML players who are actively involved in developing tools, providing services, offering consultancy or engaging in research for SGML. Describes the SGML work undertaken at Hatfield Polytechnic as part of Project Quartet, funded by the British Library Research and Development Dept. The results and findings conclude that SGML forms a strong backbone for present and future document handling systems.

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 946
  • m 315
  • s 103
  • el 102
  • i 22
  • n 17
  • r 16
  • x 14
  • b 7
  • ? 1
  • p 1
  • v 1
