-
Johnson, K.E.: OPAC missing record retrieval (1996)
0.05
0.05396261 = product of:
0.21585044 = sum of:
0.21585044 = weight(_text_:held in 6803) [ClassicSimilarity], result of:
0.21585044 = score(doc=6803,freq=6.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.6058582 = fieldWeight in 6803, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.046875 = fieldNorm(doc=6803)
0.25 = coord(1/4)
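The explain tree above follows Lucene's ClassicSimilarity formula: fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, and the final value is coord · queryWeight · fieldWeight. A minimal Python sketch reproducing the arithmetic of this first result, using only the constants shown in the tree:

```python
import math

def classic_similarity(freq, idf, field_norm, query_norm, coord):
    """Recompute one term branch of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                    # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
    query_weight = idf * query_norm         # queryWeight = idf * queryNorm
    return coord * query_weight * field_weight

# Constants taken from the explain tree for doc 6803, term "held"
score = classic_similarity(freq=6.0, idf=5.2765985,
                           field_norm=0.046875, query_norm=0.0675193,
                           coord=0.25)      # coord(1/4)
```

The same function, with freq=2.0 and the fieldNorm of the respective document, reproduces the scores of the other "held" entries in this list.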
- Abstract
- Reports results of a study, conducted at Rhode Island University Library, to determine whether cataloguing records known to be missing from a library consortium OPAC database could be identified using the database search features. Attempts to create lists of bibliographic records held by other libraries in the consortium using Boolean search features failed due to search feature limitations. Samples of search logic were created, collections of records based on this logic were assembled manually and then compared with the card catalogue of the single library. Results suggest that use of the Boolean OR operator to conduct the broadest possible search could find 56 per cent of the library's missing records that were held by other libraries. Use of the Boolean AND operator to conduct the narrowest search found 85 per cent of the missing records. A search of the records of the consortium library most likely to have overlaid the single library's holdings found that 80 per cent of the single library's missing records were held by that library
-
¬The Fourth Text Retrieval Conference (TREC-4) (1996)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 590) [ClassicSimilarity], result of:
0.16616173 = score(doc=590,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 590, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=590)
0.25 = coord(1/4)
- Abstract
- Proceedings of the 4th TREC conference held in Gaithersburg, MD, Nov 1-3, 1995. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automatic thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
-
Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 744) [ClassicSimilarity], result of:
0.16616173 = score(doc=744,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 744, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=744)
0.25 = coord(1/4)
- Abstract
- TREC is an annual conference held in the USA devoted to electronic systems for searching large full-text collections. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is intended for designers (rather than users) of systems that access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC
-
Harman, D.K.: ¬The first text retrieval conference : TREC-1, 1992 (1993)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 2317) [ClassicSimilarity], result of:
0.16616173 = score(doc=2317,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 2317, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=2317)
0.25 = coord(1/4)
- Abstract
- Reports on the 1st Text Retrieval Conference (TREC-1) held in Rockville, MD, 4-6 Nov. 1992. The TREC experiment is being run by the National Institute of Standards and Technology to allow information retrieval researchers to scale up from small collections of data to larger-sized experiments. Groups of researchers have been provided with text documents compressed on CD-ROM. They used experimental retrieval systems to search the text and evaluate the results
-
¬The Fifth Text Retrieval Conference (TREC-5) (1997)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 4087) [ClassicSimilarity], result of:
0.16616173 = score(doc=4087,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 4087, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=4087)
0.25 = coord(1/4)
- Abstract
- Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
-
¬The Sixth Text Retrieval Conference (TREC-6) (1998)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 5476) [ClassicSimilarity], result of:
0.16616173 = score(doc=5476,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 5476, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=5476)
0.25 = coord(1/4)
- Abstract
- Proceedings of the 6th TREC conference held in Gaithersburg, Maryland, Nov 19-21, 1997. The aim of the conference was to discuss retrieval techniques for large test collections. 51 research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
-
¬The Eleventh Text Retrieval Conference, TREC 2002 (2003)
0.04
0.041540433 = product of:
0.16616173 = sum of:
0.16616173 = weight(_text_:held in 5049) [ClassicSimilarity], result of:
0.16616173 = score(doc=5049,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.4663898 = fieldWeight in 5049, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0625 = fieldNorm(doc=5049)
0.25 = coord(1/4)
- Abstract
- Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks are: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
-
Harman, D.: Overview of the first Text Retrieval Conference (1993)
0.04
0.036347877 = product of:
0.14539151 = sum of:
0.14539151 = weight(_text_:held in 616) [ClassicSimilarity], result of:
0.14539151 = score(doc=616,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.40809107 = fieldWeight in 616, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0546875 = fieldNorm(doc=616)
0.25 = coord(1/4)
- Abstract
- The first Text Retrieval Conference (TREC-1) was held in early November and was attended by about 100 people working in the 25 participating groups. The goal of the conference was to bring research groups together to discuss their work on a new large test collection. A large variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences among the systems affected performance
-
¬The Second Text Retrieval Conference : TREC-2 (1995)
0.03
0.031155325 = product of:
0.1246213 = sum of:
0.1246213 = weight(_text_:held in 2320) [ClassicSimilarity], result of:
0.1246213 = score(doc=2320,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.34979236 = fieldWeight in 2320, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.046875 = fieldNorm(doc=2320)
0.25 = coord(1/4)
- Abstract
- A special issue devoted to papers from the 2nd Text Retrieval Conference (TREC-2) held in August 1993
-
Voorbij, H.: Title keywords and subject descriptors : a comparison of subject search entries of books in the humanities and social sciences (1998)
0.03
0.02596277 = product of:
0.10385108 = sum of:
0.10385108 = weight(_text_:held in 5721) [ClassicSimilarity], result of:
0.10385108 = score(doc=5721,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.29149362 = fieldWeight in 5721, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0390625 = fieldNorm(doc=5721)
0.25 = coord(1/4)
- Abstract
- In order to compare the value of subject descriptors and title keywords as entries to subject searches, two studies were carried out. Both studies concentrated on monographs in the humanities and social sciences, held by the online public access catalogue of the National Library of the Netherlands. In the first study, a comparison was made by subject librarians between the subject descriptors and the title keywords of 475 records. They could express their opinion on a scale from 1 (descriptor is exactly or almost the same as word in title) to 7 (descriptor does not appear in title at all). It was concluded that 37 per cent of the records are considerably enhanced by a subject descriptor, and 49 per cent slightly or considerably enhanced. In the second study, subject librarians performed subject searches using title keywords and subject descriptors on the same topic. The relative recall amounted to 48 per cent and 86 per cent respectively. Failure analysis revealed the reasons why so many records that were found by subject descriptors were not found by title keywords. First, although completely meaningless titles hardly ever appear, the title of a publication does not always offer sufficient clues for title keyword searching. In those cases, descriptors may enhance the record of a publication. A second and even more important task of subject descriptors is controlling the vocabulary. Many relevant titles cannot be retrieved by title keyword searching because of the wide diversity of ways of expressing a topic. Descriptors take away the burden of vocabulary control from the user.
-
Mandl, T.: Neue Entwicklungen bei den Evaluierungsinitiativen im Information Retrieval (2006)
0.02
0.019417599 = product of:
0.077670395 = sum of:
0.077670395 = weight(_text_:und in 975) [ClassicSimilarity], result of:
0.077670395 = score(doc=975,freq=14.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.51866364 = fieldWeight in 975, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=975)
0.25 = coord(1/4)
- Abstract
- In information retrieval, evaluation initiatives contribute substantially to empirically grounded research. With extensive collections and tasks they support standardisation and thus system development. Growing demands regarding corpora and application scenarios have led to a strong diversification among the evaluation initiatives. This article gives an overview of the current state of the most important evaluation initiatives and of new trends.
- Source
- Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
-
Lohmann, H.: Verbesserung der Literatursuche durch Dokumentanreicherung und automatische Inhaltserschließung : Das Projekt 'KASCADE' an der Universitäts- und Landesbibliothek Düsseldorf (1999)
0.02
0.019067705 = product of:
0.07627082 = sum of:
0.07627082 = weight(_text_:und in 2221) [ClassicSimilarity], result of:
0.07627082 = score(doc=2221,freq=6.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.50931764 = fieldWeight in 2221, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.09375 = fieldNorm(doc=2221)
0.25 = coord(1/4)
- Imprint
- Köln : Fachhochschule, Fachbereich Bibliotheks- und Informationswesen
-
Voorhees, E.M.; Harman, D.K.: ¬The Text REtrieval Conference (2005)
0.02
0.018173939 = product of:
0.072695754 = sum of:
0.072695754 = weight(_text_:held in 82) [ClassicSimilarity], result of:
0.072695754 = score(doc=82,freq=2.0), product of:
0.35627222 = queryWeight, product of:
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.0675193 = queryNorm
0.20404553 = fieldWeight in 82, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.2765985 = idf(docFreq=616, maxDocs=44421)
0.02734375 = fieldNorm(doc=82)
0.25 = coord(1/4)
- Abstract
- Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
-
Mandl, T.: Web- und Multimedia-Dokumente : Neuere Entwicklungen bei der Evaluierung von Information Retrieval Systemen (2003)
0.02
0.017977204 = product of:
0.07190882 = sum of:
0.07190882 = weight(_text_:und in 2734) [ClassicSimilarity], result of:
0.07190882 = score(doc=2734,freq=12.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.48018923 = fieldWeight in 2734, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=2734)
0.25 = coord(1/4)
- Abstract
- The amount of data on the Internet continues to grow rapidly, and with it the need for high-quality information retrieval services for orientation and problem-oriented searching. Deciding on the use or procurement of information retrieval software requires meaningful evaluation results. This contribution presents recent developments in the evaluation of information retrieval systems and shows the trend towards specialisation and diversification of evaluation studies, which increase the realism of the results. The focus is on the retrieval of specialist texts, web pages and multimedia objects.
- Source
- Information - Wissenschaft und Praxis. 54(2003) H.4, S.203-210
-
Kluck, M.; Winter, M.: Topic-Entwicklung und Relevanzbewertung bei GIRT : ein Werkstattbericht (2006)
0.02
0.017977204 = product of:
0.07190882 = sum of:
0.07190882 = weight(_text_:und in 967) [ClassicSimilarity], result of:
0.07190882 = score(doc=967,freq=12.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.48018923 = fieldWeight in 967, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=967)
0.25 = coord(1/4)
- Abstract
- The relationship between topic development and relevance assessment is discussed on the basis of several case studies from the CLEF evaluation campaign 2005. In the domain-specific retrieval test for multilingual systems, the topics were developed on the GIRT document collection. The connections between topic formulation and room for interpretation in relevance assessment are examined.
- Source
- Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
-
Wolff, C.: Leistungsvergleich der Retrievaloberflächen zwischen Web und klassischen Expertensystemen (2001)
0.02
0.016990399 = product of:
0.067961596 = sum of:
0.067961596 = weight(_text_:und in 6870) [ClassicSimilarity], result of:
0.067961596 = score(doc=6870,freq=14.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.4538307 = fieldWeight in 6870, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=6870)
0.25 = coord(1/4)
- Abstract
- Most of the hosts' web interfaces have so far been designed for retrieval laypersons, with the underlying goal of increasing usage through simpler retrieval. This approach, however, conflicts with growing data volumes and document sizes, which in fact demand ever more sophisticated retrieval. Information professionals frequently voice the criticism that the web applications bring a loss of relevance. How far users actually have to compromise between relevance and completeness is quantified in this contribution on the basis of several hosts
- Series
- Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4
- Source
- Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
-
Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016)
0.02
0.01653858 = product of:
0.06615432 = sum of:
0.06615432 = weight(_text_:und in 4068) [ClassicSimilarity], result of:
0.06615432 = score(doc=4068,freq=26.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.44176215 = fieldWeight in 4068, product of:
5.0990195 = tf(freq=26.0), with freq of:
26.0 = termFreq=26.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=4068)
0.25 = coord(1/4)
- Abstract
- Objective - Against the background of search engine bias, the study set out to determine whether the pictures of current international topics conveyed by Google and Bing in Germany and the USA differ with respect to (1) completeness, (2) coverage and (3) weighting of the respective content aspects. Research methods - For the empirical study, a method combining approaches from the empirical social sciences (content analysis) and information science (retrieval tests) was developed and applied. Results - It was found that Google and Bing in Germany and the USA (1) do not convey complete pictures of current international topics, that they (2) do not cover the three most important content aspects in the top-ranked results, and that (3) there are no significant differences in the weighting of the content aspects. These findings are, however, limited by the methodology and the analysis of the empirical study. Conclusions - Content-related search engine bias does indeed appear to exist and could influence opinion formation among search engine users. Despite the great effort involved in manual analysis, and the poorer quality of results in automatic analysis, this topic should be researched further.
- Content
- Cf.: https://yis.univie.ac.at/index.php/yis/article/view/1355. This contribution is based on the following thesis: Günther, Markus: Welches Weltbild vermitteln Suchmaschinen? Untersuchung der Gewichtung inhaltlicher Aspekte von Google- und Bing-Ergebnissen in Deutschland und den USA zu aktuellen internationalen Themen. Masterarbeit (M.A.), Hochschule für Angewandte Wissenschaften Hamburg, 2015. Full text: http://edoc.sub.uni-hamburg.de/haw/volltexte/2016/332.
-
Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001)
0.02
0.016410867 = product of:
0.06564347 = sum of:
0.06564347 = weight(_text_:und in 1261) [ClassicSimilarity], result of:
0.06564347 = score(doc=1261,freq=10.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.4383508 = fieldWeight in 1261, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=1261)
0.25 = coord(1/4)
- Abstract
- The German search engines Abacho, Acoon, Fireball and Lycos as well as the web directories Web.de and Yahoo! are subjected to a quality test measuring relative recall, precision and availability. The methods of the retrieval tests are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19% and an availability of 24% are achieved
- Source
- nfd Information - Wissenschaft und Praxis. 52(2001) H.7, S.381-392
-
Biebricher, P.; Fuhr, N.; Niewelt, B.: ¬Der AIR-Retrievaltest (1986)
0.02
0.015889753 = product of:
0.06355901 = sum of:
0.06355901 = weight(_text_:und in 4108) [ClassicSimilarity], result of:
0.06355901 = score(doc=4108,freq=6.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.42443132 = fieldWeight in 4108, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.078125 = fieldNorm(doc=4108)
0.25 = coord(1/4)
- Abstract
- The contribution describes the execution and results of the retrieval test for the AIR/PHYS project. With its 309 queries and 15,000 documents it ranks among the largest retrieval tests carried out to date for the evaluation of automated indexing or retrieval methods.
- Source
- Automatische Indexierung zwischen Forschung und Anwendung, Hrsg.: G. Lustig
-
Griesbaum, J.; Rittberger, M.; Bekavac, B.: Deutsche Suchmaschinen im Vergleich : AltaVista.de, Fireball.de, Google.de und Lycos.de (2002)
0.02
0.015889753 = product of:
0.06355901 = sum of:
0.06355901 = weight(_text_:und in 2159) [ClassicSimilarity], result of:
0.06355901 = score(doc=2159,freq=6.0), product of:
0.149751 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0675193 = queryNorm
0.42443132 = fieldWeight in 2159, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.078125 = fieldNorm(doc=2159)
0.25 = coord(1/4)
- Source
- Information und Mobilität: Optimierung und Vermeidung von Mobilität durch Information. Proceedings des 8. Internationalen Symposiums für Informationswissenschaft (ISI 2002), 7.-10.10.2002, Regensburg. Hrsg.: Rainer Hammwöhner, Christian Wolff, Christa Womser-Hacker