-
Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010)
0.06
0.055446703 = product of:
0.22178681 = sum of:
0.22178681 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
0.22178681 = score(doc=3605,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.46718815 = fieldWeight in 3605, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=3605)
0.25 = coord(1/4)
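The explain tree above can be checked by hand. As a minimal sketch, assuming Lucene's ClassicSimilarity conventions as shown in the tree (tf(freq) = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, coord = fraction of matching query clauses), the figures for doc 3605 recombine as follows; the class and variable names are illustrative only.

public class ExplainCheck {
    public static void main(String[] args) {
        // Figures copied from the explain tree for doc 3605 (term "java").
        double idf = 7.0475073;        // idf(docFreq=104, maxDocs=44421)
        double queryNorm = 0.067360975;
        double fieldNorm = 0.046875;   // fieldNorm(doc=3605)
        double tf = Math.sqrt(2.0);    // tf(freq=2.0) = 1.4142135

        double queryWeight = idf * queryNorm;         // 0.47472697
        double fieldWeight = tf * idf * fieldNorm;    // 0.46718815
        double termScore = queryWeight * fieldWeight; // 0.22178681
        double docScore = termScore * 0.25;           // coord(1/4) -> 0.055446703

        System.out.printf("%.8f %.8f %.8f %.8f%n",
                queryWeight, fieldWeight, termScore, docScore);
    }
}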
- Abstract
- For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open-source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end-of-chapter problems (instructors only) / Test collections for exercises / Galago search engine
-
Tang, X.-B.; Wei, W.; Liu, G.-C.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017)
0.06
0.055446703 = product of:
0.22178681 = sum of:
0.22178681 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
0.22178681 = score(doc=4615,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.46718815 = fieldWeight in 4615, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=4615)
0.25 = coord(1/4)
- Abstract
- Medical insurance fraud is common in many countries' medical insurance systems and poses a serious threat to the insurance funds and to patients' benefits. In this paper, we present an inference model for medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built an ontology of medical sciences and medical service behaviors. The model then applies the Semantic Web Rule Language (SWRL) and the Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
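The rule base itself is not reproduced in the abstract. As a purely illustrative sketch of the SWRL/JESS style of reasoning described above (the template, slot names, rule and facts below are invented, not taken from the paper), a duplicate-prescription check could be loaded into the JESS engine from Java roughly like this:

import jess.Rete;

public class FraudRuleSketch {
    public static void main(String[] args) throws Exception {
        Rete engine = new Rete();
        // Hypothetical claim template and rule: flag two claims for the same
        // patient and drug on different dates as a possible irregularity.
        engine.eval("(deftemplate claim (slot patient) (slot drug) (slot date))");
        engine.eval("(defrule duplicate-prescription "
                + "(claim (patient ?p) (drug ?d) (date ?t1)) "
                + "(claim (patient ?p) (drug ?d) (date ?t2&:(neq ?t1 ?t2))) "
                + "=> (printout t \"possible duplicate claim for \" ?p crlf))");
        engine.eval("(assert (claim (patient \"P01\") (drug \"insulin\") (date \"2017-01-03\")))");
        engine.eval("(assert (claim (patient \"P01\") (drug \"insulin\") (date \"2017-01-05\")))");
        engine.run();  // fires the rule and prints the alert
    }
}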
-
Newell, R.: Geographic information systems : where are we? and where do we go from here? (1994)
0.05
0.053724565 = product of:
0.21489826 = sum of:
0.21489826 = weight(_text_:here in 6710) [ClassicSimilarity], result of:
0.21489826 = score(doc=6710,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5936969 = fieldWeight in 6710, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.078125 = fieldNorm(doc=6710)
0.25 = coord(1/4)
-
Bradley, D.; Frederick, J.: ¬The Clinton Electronic Communications Project : an experiment in electronic democracy (1994)
0.05
0.053724565 = product of:
0.21489826 = sum of:
0.21489826 = weight(_text_:here in 8330) [ClassicSimilarity], result of:
0.21489826 = score(doc=8330,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5936969 = fieldWeight in 8330, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.078125 = fieldNorm(doc=8330)
0.25 = coord(1/4)
- Abstract
- The Clinton Electronic Communications Project is the successor to the Clinton campaign's e-mail programme. The object of the work reported here was to determine the degree to which White House material posted to the Internet is more current and comprehensive than information available through more traditional sources
-
Jörgensen, C.; Liddy, E.D.: Information access or information anxiety? : an exploratory evaluation of book index features (1996)
0.05
0.053724565 = product of:
0.21489826 = sum of:
0.21489826 = weight(_text_:here in 6923) [ClassicSimilarity], result of:
0.21489826 = score(doc=6923,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5936969 = fieldWeight in 6923, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.078125 = fieldNorm(doc=6923)
0.25 = coord(1/4)
- Abstract
- The authors conducted a controlled user study in both print and electronic environments and present here a subset of results from index use in the print format
-
Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012)
0.05
0.053724565 = product of:
0.21489826 = sum of:
0.21489826 = weight(_text_:here in 3861) [ClassicSimilarity], result of:
0.21489826 = score(doc=3861,freq=8.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5936969 = fieldWeight in 3861, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.0390625 = fieldNorm(doc=3861)
0.25 = coord(1/4)
- Abstract
- Today's conventional search engines rarely provide content that is truly relevant to the user's query, because the context and semantics of the request are not analyzed to the full extent. This is where semantic web search (SWS) comes in, an emerging area of web search that combines natural language processing and artificial intelligence. The objective of the work reported here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search; it works one layer above what Google or any other search engine retrieves by analyzing keywords alone, since the query is analyzed both syntactically and semantically. The system retrieves web results that are more relevant to the user query through keyword expansion, and the semantic analysis of the query makes the results accurate enough to satisfy the user's request. The Google results are re-ranked and optimized to provide the most relevant links, using a ranking algorithm that fetches more apt results for the user query. The system should be of use to developers and researchers who work on the web.
-
Fuller, M.: Media ecologies : materialist energies in art and technoculture (2005)
0.05
0.053334456 = product of:
0.10666891 = sum of:
0.020709611 = weight(_text_:und in 1469) [ClassicSimilarity], result of:
0.020709611 = score(doc=1469,freq=16.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.1386187 = fieldWeight in 1469, product of:
4.0 = tf(freq=16.0), with freq of:
16.0 = termFreq=16.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.015625 = fieldNorm(doc=1469)
0.0859593 = weight(_text_:here in 1469) [ClassicSimilarity], result of:
0.0859593 = score(doc=1469,freq=8.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.23747875 = fieldWeight in 1469, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.015625 = fieldNorm(doc=1469)
0.5 = coord(2/4)
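Unlike the single-term entries above, this record matches two of the four query clauses ("und" and "here"), so the clause weights are summed before the coordination factor is applied:

\[
\text{score} = \operatorname{coord}\left(\tfrac{2}{4}\right)\times\left(w_{\text{und}} + w_{\text{here}}\right) = 0.5 \times (0.020709611 + 0.0859593) = 0.053334456 .
\]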
- Abstract
- In Media Ecologies, Matthew Fuller asks what happens when media systems interact. Complex objects such as media systems - understood here as processes, or elements in a composition as much as "things" - have become informational as much as physical, but without losing any of their fundamental materiality. Fuller looks at this multiplicitous materiality - how it can be sensed, made use of, and how it makes other possibilities tangible. He investigates the ways the different qualities in media systems can be said to mix and interrelate, and, as he writes, "to produce patterns, dangers, and potentials." Fuller draws on texts by Felix Guattari (and his "serial collaborator" Gilles Deleuze) as well as writings by Friedrich Nietzsche, Marshall McLuhan, Donna Haraway, Friedrich Kittler, and others, to define and extend the idea of "media ecology." Arguing that the only way to find out about what happens when media systems interact is to carry out such interactions, Fuller traces a series of media ecologies - "taking every path in a labyrinth simultaneously," as he describes one chapter. He looks at contemporary London-based pirate radio and its interweaving of high- and low-tech media systems; the "medial will to power" illustrated by "the camera that ate itself"; how, as seen in a range of compelling interpretations of new media works, the capacities and behaviors of media objects are affected when they are in "abnormal" relationships with other objects; and each step in a sequence of Web pages, "Cctv - world wide watch," that encourages viewers to report crimes seen via webcams. Contributing to debates around standardisation, cultural evolution, cybernetic culture, and surveillance, and inventing a politically challenging aesthetic that links them, Media Ecologies, with its various narrative speeds, scales, frames of references, and voices, does not offer the academically traditional unifying framework; rather, Fuller says, it proposes to capture "an explosion of activity and ideas to which it hopes to add an echo."
- Classification
- AP 13550 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Theorie und Methodik / Grundlagen, Methodik, Theorie
- Footnote
- Rez. in: JASIST 58(2007) no.8, S.1222 (P.K. Nayar): "Media ecology is the intersection of information and communications technology (ICTs), organizational behavior, and human interaction. Technology, especially ICT, is the environment of human culture today, from individuals to organizations, in metropolises across the world. Fuller defines media ecology as "the allocation of informational roles in organizations and in computer-supported collaborative work" (p. 3), a fairly comprehensive definition. Fuller opens with a study of a pirate radio in London. Thinkers on media and culture figure prominently here: Stuart Hall, J.J. Gibson's ecological psychology, Deleuze and Guattari. Exploring the attempted regulation of radio, the dissemination into multiple "forms," and the structures that facilitate this, Fuller presents the environment in which "subversive" radio broadcasts take place. Marketing and voices, microphones, and language codes all begin to interact with each other to form a higher order of a material or "machinic" universe (Fuller here adapts Deleuze and Guattari's concept of a "machinic phylum," defined as "materiality, natural or artificial, and both simultaneously; it is matter in movement, in flux, in variation, matter as a conveyer of singularities and traits of expression," p. 17). Using hip-hop as a case study, Fuller argues that digitized sound transforms the voice from indexical to the "rhythmatic." Music becomes fundamentally synthetic here (p. 31), and acquires the potential to access a greater space of embodiment. Other factors often ignored in media studies, such as the role of DJs (disc jockeys), are worked into a holistic account. The DJ, notes Fuller, is a switch for the pirate station, but is also a creator of hype. Storing, transposing, organizing time, the DJ is a crucial element in the informational ecology of the radio station. Fuller argues that "things" like the mobile phone must be treated as media assemblages. Pirate radio is an example of the minoritarian use of media systems, according to Fuller.
- RVK
- AP 13550 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Theorie und Methodik / Grundlagen, Methodik, Theorie
-
Keeler, M.: Pragmatically yours, (2000)
0.05
0.053184606 = product of:
0.21273842 = sum of:
0.21273842 = weight(_text_:here in 6072) [ClassicSimilarity], result of:
0.21273842 = score(doc=6072,freq=4.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.58772993 = fieldWeight in 6072, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.0546875 = fieldNorm(doc=6072)
0.25 = coord(1/4)
- Abstract
- Here is my attempt to find the roots of "the ontological problem" in AI research identified by Daniel Kayser at ICCS'98. I reconstruct just enough of C.S. Peirce's "scientific philosophy" to suggest how pragmatism responds to fundamental (metaphysical) issues in Knowledge Representation, and to indicate how Kayser's notion of "variable ontology" for "conceptual adaptation" might be interpreted as pragmatic ontology, a model methodology for Conceptual Structures research and development. At least, here may be an introduction to further investigations?
-
Sokal, A.: Transgressing the boundaries : toward a transformative hermeneutics of quantum gravity (1996)
0.05
0.05093251 = product of:
0.10186502 = sum of:
0.025887014 = weight(_text_:und in 3136) [ClassicSimilarity], result of:
0.025887014 = score(doc=3136,freq=16.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.17327337 = fieldWeight in 3136, product of:
4.0 = tf(freq=16.0), with freq of:
16.0 = termFreq=16.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.01953125 = fieldNorm(doc=3136)
0.07597801 = weight(_text_:here in 3136) [ClassicSimilarity], result of:
0.07597801 = score(doc=3136,freq=4.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.20990355 = fieldWeight in 3136, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.01953125 = fieldNorm(doc=3136)
0.5 = coord(2/4)
- Abstract
- Here my aim is to carry these deep analyses one step farther, by taking account of recent developments in quantum gravity: the emerging branch of physics in which Heisenberg's quantum mechanics and Einstein's general relativity are at once synthesized and superseded. In quantum gravity, as we shall see, the space-time manifold ceases to exist as an objective physical reality; geometry becomes relational and contextual; and the foundational conceptual categories of prior science -- among them, existence itself -- become problematized and relativized. This conceptual revolution, I will argue, has profound implications for the content of a future postmodern and liberatory science. My approach will be as follows: First I will review very briefly some of the philosophical and ideological issues raised by quantum mechanics and by classical general relativity. Next I will sketch the outlines of the emerging theory of quantum gravity, and discuss some of the conceptual issues it raises. Finally, I will comment on the cultural and political implications of these scientific developments. It should be emphasized that this article is of necessity tentative and preliminary; I do not pretend to answer all of the questions that I raise. My aim is, rather, to draw the attention of readers to these important developments in physical science, and to sketch as best I can their philosophical and political implications. I have endeavored here to keep mathematics to a bare minimum; but I have taken care to provide references where interested readers can find all requisite details.
- Content
- In 1996 the American physicist Alan Sokal submitted an article entitled Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity (German: Die Grenzen überschreiten: Auf dem Weg zu einer transformativen Hermeneutik der Quantengravitation) to the American cultural studies journal Social Text, which is known for its postmodern orientation. The journal printed it, unchallenged, together with other contributions in a special issue. Shortly after publication Sokal revealed in another journal, Lingua Franca, that the article was a parody. He had assembled quotations gathered from various postmodern thinkers, couched in the typical jargon of that school of thought, into a text whose nonsensical content, so his charge against the editors of Social Text, should have been recognized as such if scholarly standards had been observed. The incident triggered a public debate in academic circles and in the press (the case even made the front page of the New York Times) about how to assess this episode in particular and the seriousness of postmodern philosophy in general. Sokal and representatives of the criticized group continued the discussion in further journal articles and defended their positions. In 1997 Sokal and his Belgian colleague Jean Bricmont published a book on the affair entitled Impostures Intellectuelles (in English: Intellectual Impostures; German title: Eleganter Unsinn), in which he explains his theses and illustrates them with examples from texts by prominent postmodern French philosophers (namely Jean Baudrillard, Gilles Deleuze/Félix Guattari, Luce Irigaray, Julia Kristeva, Jacques Lacan, Bruno Latour and Paul Virilio, and, although not a postmodernist, as a historical example, Henri Bergson). In this book Sokal and Bricmont, besides defending themselves against the presumed misuse of science, also stated a political motive for their initiative: they identified with the political left and argued that the growing spread of postmodern thinking within the left weakened its capacity for effective social criticism. (http://de.wikipedia.org/wiki/Sokal-Aff%C3%A4re)
-
DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021)
0.05
0.050447866 = product of:
0.10089573 = sum of:
0.036426257 = weight(_text_:und in 1406) [ClassicSimilarity], result of:
0.036426257 = score(doc=1406,freq=22.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.24381724 = fieldWeight in 1406, product of:
4.690416 = tf(freq=22.0), with freq of:
22.0 = termFreq=22.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0234375 = fieldNorm(doc=1406)
0.06446948 = weight(_text_:here in 1406) [ClassicSimilarity], result of:
0.06446948 = score(doc=1406,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.17810906 = fieldWeight in 1406, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.0234375 = fieldNorm(doc=1406)
0.5 = coord(2/4)
- Abstract
- Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
- Footnote
- See also: Rötzer, F.: Warum schrumpft das Gehirn des Menschen seit ein paar Tausend Jahren? [Why has the human brain been shrinking for a few thousand years?] Available at: https://krass-und-konkret.de/wissenschaft-technik/warum-schrumpft-das-gehirn-des-menschen-seit-ein-paar-tausend-jahren/. "... for a few thousand years - some say for 10,000 years - that is, after the beginning of agriculture, of sedentary life and the founding of cities, and after the invention of writing, the human brain has, surprisingly, been shrinking again. ... It is generally assumed that with the first tools, and above all with the invention of writing, cognitive functions, especially memory, were externalized, though at the price of having to develop new capacities such as reading and writing. Memory comprises individual experiences, but also collective knowledge to which all members of a community contribute and into which the knowledge and experiences of the ancestors are inscribed. In the digital age the externalization and unburdening of brains goes much further, because with AI not only knowledge content but also cognitive abilities such as searching, gathering, analyzing and evaluating information for decision-making are externalized, while the externalized brains, such as the Internet, learn and expand collectively in real time. Through neural implants, humans could eventually be connected directly to the externalized brains, and could also directly extend their cognitive capacities by incorporating prostheses, new sensors or machines/robots, even remote ones, into the augmented body of the brains.
The researchers see these developments as the background, but want to explain, via a comparison with brain evolution in ants, why humans today have developed smaller brains than their ancestors 100,000 years ago. The decrease in brain size, so their hypothesis, could "result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information"."
-
Shiri, A.A.; Revie, C.; Chowdhury, G.: Assessing the impact of user interaction with thesaural knowledge structures : a quantitative analysis framework (2003)
0.05
0.050301604 = product of:
0.10060321 = sum of:
0.014643907 = weight(_text_:und in 3766) [ClassicSimilarity], result of:
0.014643907 = score(doc=3766,freq=2.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.098018214 = fieldWeight in 3766, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=3766)
0.0859593 = weight(_text_:here in 3766) [ClassicSimilarity], result of:
0.0859593 = score(doc=3766,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.23747875 = fieldWeight in 3766, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.03125 = fieldNorm(doc=3766)
0.5 = coord(2/4)
- Abstract
- Thesauri have been important information and knowledge organisation tools for more than three decades. The recent emergence and phenomenal growth of the World Wide Web has created new opportunities to introduce thesauri as information search and retrieval aids to end user communities. While the number of web-based and hypertextual thesauri continues to grow, few investigations have yet been carried out to evaluate how end-users, for whom all these efforts are ostensibly made, interact with and make use of thesauri for query building and expansion. The present paper reports a pilot study carried out to determine the extent to which a thesaurus-enhanced search interface to a web-based database aided end-users in their selection of search terms. The study also investigated the ways in which users interacted with the thesaurus structure, terms, and interface. Thesaurus-based searching and browsing behaviours adopted by users while interacting with the thesaurus-enhanced search interface were also examined. 1. Introduction The last decade has witnessed the emergence of a broad range of applications for knowledge structures in general and thesauri in particular. A number of researchers have predicted that thesauri will increasingly be used in retrieval rather than for indexing (Milstead, 1998; Aitchison et al., 1997) and that their application in information retrieval systems will become more diverse due to the growth of fulltext databases accessed over the Internet (Williamson, 2000). Some researchers have emphasised the need for tailoring the structure and content of thesauri as tools for end-user searching (Bates, 1986; Strong and Drott, 1986; Anderson and Rowley, 1991; Lopez-Huertas, 1997) while others have suggested thesaurus-enhanced user interfaces to support query formulation and expansion (Pollitt et al., 1994; Jones et al., 1995; Beaulieu, 1997). The recent phenomenal growth of the World Wide Web has created new opportunities to introduce thesauri as information search and retrieval aids to end user communities. While the number of web-based and hypertextual thesauri continues to grow, few investigations have been carried out to evaluate the ways in which end-users interact with and make use of online thesauri for query building and expansion. The work reported here expands on a pilot study (Shiri and Revie, 2001) carried out to investigate user-thesaurus interaction in the domains of biology and veterinary medicine.
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Egghe, L.: Empirical and combinatorial study of country occurrences in multi-authored papers (2006)
0.05
0.050301604 = product of:
0.10060321 = sum of:
0.014643907 = weight(_text_:und in 206) [ClassicSimilarity], result of:
0.014643907 = score(doc=206,freq=2.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.098018214 = fieldWeight in 206, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=206)
0.0859593 = weight(_text_:here in 206) [ClassicSimilarity], result of:
0.0859593 = score(doc=206,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.23747875 = fieldWeight in 206, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.03125 = fieldNorm(doc=206)
0.5 = coord(2/4)
- Abstract
- Papers written by several authors can be classified according to the countries of the author affiliations. The empirical part of this paper consists of two datasets. One dataset consists of 1,035 papers retrieved via the search "pedagog*" in the years 2004 and 2005 (up to October) in Academic Search Elite, which is a case where phi(m), the number of papers with m = 1, 2, 3, ... authors, is decreasing; hence most of the papers have a low number of authors. Here we find that #j,m, the number of times a country occurs j times in an m-authored paper, is decreasing in j = 1, ..., m-1, and that #m,m is much higher than all the other #j,m values. The other dataset consists of 3,271 papers retrieved via the search "enzyme" in the year 2005 (up to October) in the same database, which is a case of a non-decreasing phi(m): most papers have 3 or 4 authors and we even find many papers with a much higher number of authors. In this case we show again that #m,m is much higher than the other #j,m values, but that #j,m is no longer decreasing in j = 1, ..., m-1, although #1,m is (apart from #m,m) the largest number amongst the #j,m. The combinatorial part gives a proof of the fact that #j,m decreases for j = 1, ..., m-1, supposing that all cases are equally possible. This shows that the first dataset conforms more to this model than the second dataset. Explanations for these findings are given. From the data we also find the (we think: new) distribution of the number of papers with n = 1, 2, 3, ... countries (i.e. where there are n different countries involved amongst the m (>= n) authors of a paper): a fast decreasing function, e.g. as a power law with a very large Lotka exponent.
- Source
- Information - Wissenschaft und Praxis. 57(2006) H.8, S.427-432
-
Willis, C.; Losee, R.M.: ¬A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013)
0.05
0.050301604 = product of:
0.10060321 = sum of:
0.014643907 = weight(_text_:und in 2016) [ClassicSimilarity], result of:
0.014643907 = score(doc=2016,freq=2.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.098018214 = fieldWeight in 2016, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=2016)
0.0859593 = weight(_text_:here in 2016) [ClassicSimilarity], result of:
0.0859593 = score(doc=2016,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.23747875 = fieldWeight in 2016, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.03125 = fieldNorm(doc=2016)
0.5 = coord(2/4)
- Abstract
- Relationships between terms and features are an essential component of thesauri, ontologies, and a range of controlled vocabularies. In this article, we describe ways to identify important concepts in documents using the relationships in a thesaurus or other vocabulary structures. We introduce a methodology for the analysis and modeling of the indexing process based on a weighted random walk algorithm. The primary goal of this research is the analysis of the contribution of thesaurus structure to the indexing process. The resulting models are evaluated in the context of automatic subject indexing using four collections of documents pre-indexed with 4 different thesauri (AGROVOC [UN Food and Agriculture Organization], high-energy physics taxonomy [HEP], National Agricultural Library Thesaurus [NALT], and medical subject headings [MeSH]). We also introduce a thesaurus-centric matching algorithm intended to improve the quality of candidate concepts. In all cases, the weighted random walk improves automatic indexing performance over matching alone with an increase in average precision (AP) of 9% for HEP, 11% for MeSH, 35% for NALT, and 37% for AGROVOC. The results of the analysis support our hypothesis that subject indexing is in part a browsing process, and that using the vocabulary and its structure in a thesaurus contributes to the indexing process. The amount that the vocabulary structure contributes was found to differ among the 4 thesauri, possibly due to the vocabulary used in the corresponding thesauri and the structural relationships between the terms. Each of the thesauri and the manual indexing associated with it is characterized using the methods developed here.
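As a rough illustration of a weighted random walk over a thesaurus graph (not the authors' implementation; the terms and weights below are invented), a single biased step and its long-run behaviour can be sketched as follows:

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class WeightedWalkSketch {
    // Pick a neighbour of the current term with probability proportional to its edge weight.
    static String step(Map<String, Double> neighbours, Random rnd) {
        double total = neighbours.values().stream().mapToDouble(Double::doubleValue).sum();
        double r = rnd.nextDouble() * total;
        for (Map.Entry<String, Double> e : neighbours.entrySet()) {
            r -= e.getValue();
            if (r <= 0) return e.getKey();
        }
        return neighbours.keySet().iterator().next(); // numerical fallback
    }

    public static void main(String[] args) {
        // Hypothetical AGROVOC-like links from one term; the preferred term gets a higher weight.
        Map<String, Double> fromMaize = Map.of("cereals", 1.0, "Zea mays", 3.0, "grain crops", 1.0);
        Random rnd = new Random(42);
        Map<String, Integer> visits = new HashMap<>();
        for (int i = 0; i < 10000; i++) {
            visits.merge(step(fromMaize, rnd), 1, Integer::sum);
        }
        System.out.println(visits); // "Zea mays" should draw roughly 3/5 of the steps
    }
}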
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Veltman, K.H.: From Recorded World to Recording Worlds (2007)
0.05
0.04870394 = product of:
0.09740788 = sum of:
0.022193493 = weight(_text_:und in 1512) [ClassicSimilarity], result of:
0.022193493 = score(doc=1512,freq=6.0), product of:
0.14939985 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.067360975 = queryNorm
0.14855097 = fieldWeight in 1512, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.02734375 = fieldNorm(doc=1512)
0.075214386 = weight(_text_:here in 1512) [ClassicSimilarity], result of:
0.075214386 = score(doc=1512,freq=2.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.2077939 = fieldWeight in 1512, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.02734375 = fieldNorm(doc=1512)
0.5 = coord(2/4)
- Abstract
- The range, depths and limits of what we know depend on the media with which we attempt to record our knowledge. This essay begins with a brief review of developments in media (stone, manuscripts, books and digital media) to trace how collections of recorded knowledge expanded to 235,000 titles in 1837 and have expanded to over 100 million unique titles in a single database including over 1 billion individual listings in 2007. The advent of digital media has brought full text scanning and electronic networks, which enable us to consult digital books and images from our office, home or potentially even with our cell phones. These magnificent developments raise a number of concerns and new challenges. An historical survey of major projects that changed the world reveals that they have taken from one to eight centuries. This helps explain why commercial offerings, which offer useful, and even profitable, short-term solutions, often undermine a long-term vision. New technologies have the potential to transform our approach to knowledge, but require a vision of a systematic new approach to knowledge. This paper outlines four ingredients for such a vision in the European context. First, the scope of European observatories should be expanded to inform memory institutions of the latest technological developments. Second, the quest for a European Digital Library should be expanded to include a distributed repository, a digital reference room and a virtual agora, whereby memory institutions will be linked with current research. Third, there is a need for an institute on Knowledge Organization that takes up anew Otlet's vision, and the pioneering efforts of the Mundaneum (Brussels) and the Bridge (Berlin). Fourth, we need to explore requirements for a Universal Digital Library, which works with countries around the world rather than simply imposing on them an external system. Here, the efforts of the proposed European University of Culture could be useful. Ultimately we need new systems, which open research into multiple ways of knowing, multiple "knowledges". In the past, we went to libraries to study the recorded world. In a world where cameras and sensors are omnipresent we have new recording worlds. In future, we may also use these recording worlds to study the riches of libraries.
- Content
- Cf. the note in: Online-Mitteilungen 2007, Nr.91 [=Mitt. VOEB 60(2007) H.3], S.15: "At the conference 'Herausforderung: Digitale Langzeitarchivierung - Strategien und Praxis europäischer Kooperation' (Challenge: digital long-term preservation - strategies and practice of European cooperation), held on 20-21 April 2007 at the Deutsche Nationalbibliothek (Frankfurt am Main), the speakers dealt not only with the preservation of cultural heritage but also, among other things, with the 'recording of worlds'. How this 'recording of the world' can be managed (even) better in the future, in view of the abundance and steady growth of information, was the topic of Kim H. Veltman's talk. He presented four highly thought-provoking approaches: creation of a central European body that informs memory institutions about the latest technological developments; establishment of a digital reference room and a virtual agora within the European Digital Library; foundation of an institute for knowledge organization; and exploration of the requirements for a 'Universal Digital Library'."
-
Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011)
0.05
0.046526838 = product of:
0.18610735 = sum of:
0.18610735 = weight(_text_:here in 130) [ClassicSimilarity], result of:
0.18610735 = score(doc=130,freq=6.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5141566 = fieldWeight in 130, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.0390625 = fieldNorm(doc=130)
0.25 = coord(1/4)
- Abstract
- The mapping of scientific fields, based on principles established in the seventies, has recently shown a remarkable development and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the two networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they can be substituted for each other. Differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are considered as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasted logic.
-
Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998)
0.05
0.046205588 = product of:
0.18482235 = sum of:
0.18482235 = weight(_text_:java in 1871) [ClassicSimilarity], result of:
0.18482235 = score(doc=1871,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.38932347 = fieldWeight in 1871, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=1871)
0.25 = coord(1/4)
- Abstract
- As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than that of the best first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for the design of a truly interactive and dynamic Web agent.
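The best first search strategy mentioned above can be pictured with a small sketch (a toy link graph and fixed relevance scores stand in for real fetching and keyword indexing; none of this is the authors' code): pages sit in a priority queue and are always expanded in order of their current relevance estimate.

import java.util.*;

public class BestFirstSpiderSketch {
    public static void main(String[] args) {
        // Toy link graph and relevance scores (hypothetical values).
        Map<String, List<String>> links = Map.of(
                "start", List.of("a", "b"),
                "a", List.of("c"),
                "b", List.of("c", "d"),
                "c", List.of(),
                "d", List.of());
        Map<String, Double> relevance =
                Map.of("start", 1.0, "a", 0.9, "b", 0.4, "c", 0.8, "d", 0.1);

        PriorityQueue<String> frontier =
                new PriorityQueue<>(Comparator.comparingDouble((String p) -> -relevance.get(p)));
        Set<String> visited = new HashSet<>();
        frontier.add("start");
        while (!frontier.isEmpty()) {
            String page = frontier.poll();
            if (!visited.add(page)) continue;  // skip pages already expanded
            System.out.println("visit " + page + " (score " + relevance.get(page) + ")");
            for (String next : links.get(page)) {
                if (!visited.contains(next)) frontier.add(next);
            }
        }
    }
}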
-
Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006)
0.05
0.046205588 = product of:
0.18482235 = sum of:
0.18482235 = weight(_text_:java in 272) [ClassicSimilarity], result of:
0.18482235 = score(doc=272,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.38932347 = fieldWeight in 272, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=272)
0.25 = coord(1/4)
- Abstract
- This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
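For reference, Freeman's betweenness centrality, used here to flag potential pivotal points, is conventionally defined for a node $v$ as

\[
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}},
\]

where $\sigma_{st}$ is the number of shortest paths between $s$ and $t$ and $\sigma_{st}(v)$ is the number of those paths passing through $v$; nodes with high values lie on many shortest paths and are therefore candidate pivot points between clusters.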
-
Eddings, J.: How the Internet works (1994)
0.05
0.046205588 = product of:
0.18482235 = sum of:
0.18482235 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
0.18482235 = score(doc=2514,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.38932347 = fieldWeight in 2514, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=2514)
0.25 = coord(1/4)
- Abstract
- How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer for Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.05
0.046205588 = product of:
0.18482235 = sum of:
0.18482235 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
0.18482235 = score(doc=4179,freq=2.0), product of:
0.47472697 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.067360975 = queryNorm
0.38932347 = fieldWeight in 4179, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=4179)
0.25 = coord(1/4)
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
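The abstract names the Apache Jena API without showing code; as an illustrative sketch only (the ontology file name is an assumption, and the calls are limited to standard Jena operations), loading such an OWL ontology and listing its classes might look like this:

import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.iterator.ExtendedIterator;

public class RecordingOntologySketch {
    public static void main(String[] args) {
        // Load the (hypothetical) classical music recording ontology into an in-memory OWL model.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("file:classical-music-recording.owl");

        // List the named classes, e.g. to build the catalog's hierarchical main page.
        ExtendedIterator<OntClass> classes = model.listClasses();
        while (classes.hasNext()) {
            OntClass c = classes.next();
            if (c.getURI() != null) {
                System.out.println(c.getLocalName());
            }
        }
    }
}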
-
Salet Ferreira Novellino, M.: Information transfer considering the production and use contexts : information retrieval languages (1998)
0.05
0.045586802 = product of:
0.18234721 = sum of:
0.18234721 = weight(_text_:here in 1147) [ClassicSimilarity], result of:
0.18234721 = score(doc=1147,freq=4.0), product of:
0.36196628 = queryWeight, product of:
5.373531 = idf(docFreq=559, maxDocs=44421)
0.067360975 = queryNorm
0.5037685 = fieldWeight in 1147, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
5.373531 = idf(docFreq=559, maxDocs=44421)
0.046875 = fieldNorm(doc=1147)
0.25 = coord(1/4)
- Abstract
- Information transfer languages (ITLs) are languages for the representation and retrieval of information production and use contexts, to be used in digital library environments. Information transfer is defined here not only as a technical act but also as a social act, in which what prevails is not the relationship among information system, document and user, but the relationship between the subjects who produce and use information. The justification for the construction of the ITL is that thematic indication alone does not enable the user to obtain relevant information. The proposed way to solve this problem is to relate document properties to their production conditions and to their possible practical applications. It is acknowledged here that the document producer has certain communication intentions, in accordance with his social activities, and that the document user has information needs in accordance with his action context. My thesis is that, by making these communication intentions readable to users, they will be able to choose the information set most useful to their praxis.