-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.04
0.04282636 = product of:
0.17130543 = sum of:
0.17130543 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
0.17130543 = score(doc=4179,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.38932347 = fieldWeight in 4179, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=4179)
0.25 = coord(1/4)
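The block above (and the similar blocks below) is Lucene "explain" output for ClassicSimilarity, i.e. a tf-idf score decomposed into its factors. As a minimal sketch (the class and variable names are invented; every constant is taken from the tree itself), the final score of 0.04282636 can be reproduced like this:

// Reproduces the ClassicSimilarity explanation above for the term "java" in doc 4179.
// All constants come from the explain tree; ClassicSimilarity defines
// idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(termFreq).
public class ExplainCheck {
    public static void main(String[] args) {
        double termFreq = 2.0;
        double idf = 7.0475073;         // idf(docFreq=104, maxDocs=44421)
        double queryNorm = 0.062434554;
        double fieldNorm = 0.0390625;
        double coord = 0.25;            // 1 of 4 query clauses matched

        double tf = Math.sqrt(termFreq);           // 1.4142135
        double queryWeight = idf * queryNorm;      // 0.44000798
        double fieldWeight = tf * idf * fieldNorm; // 0.38932347

        double score = coord * queryWeight * fieldWeight;
        System.out.printf("score = %.8f%n", score); // ~0.04282636
    }
}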
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
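The abstract mentions extracting the ontology's classes and properties with the Apache Jena API before building the Java catalog. A minimal hedged sketch of that step (not the authors' code; the ontology file name is a placeholder) could look like this:

import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch only: load an OWL ontology with Apache Jena and list its classes
// and object properties, as the catalog-construction step describes.
public class OntologyDump {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("file:music-recording.owl"); // placeholder file name

        System.out.println("Classes:");
        model.listClasses().forEachRemaining(c -> System.out.println("  " + c.getLocalName()));

        System.out.println("Object properties:");
        model.listObjectProperties().forEachRemaining(p -> System.out.println("  " + p.getLocalName()));
    }
}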
-
TREC: experiment and evaluation in information retrieval (2005)
0.04
0.039647277 = product of:
0.15858911 = sum of:
0.15858911 = weight(_text_:hosted in 761) [ClassicSimilarity], result of:
0.15858911 = score(doc=761,freq=4.0), product of:
0.5034649 = queryWeight, product of:
8.063882 = idf(docFreq=37, maxDocs=44421)
0.062434554 = queryNorm
0.31499538 = fieldWeight in 761, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
8.063882 = idf(docFreq=37, maxDocs=44421)
0.01953125 = fieldNorm(doc=761)
0.25 = coord(1/4)
- Abstract
- The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
- Footnote
- Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval and speeds the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been substantial, and its success has been sustained largely by its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems, such as Web retrieval, speech retrieval, and question answering. The long and intense run of annual TREC conferences has produced an immense body of documents reflecting the various evaluation and research efforts. This sometimes makes it difficult to see clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
-
Albrechtsen, H.: ISKO news (2007)
0.04
0.0392488 = product of:
0.1569952 = sum of:
0.1569952 = weight(_text_:hosted in 1095) [ClassicSimilarity], result of:
0.1569952 = score(doc=1095,freq=2.0), product of:
0.5034649 = queryWeight, product of:
8.063882 = idf(docFreq=37, maxDocs=44421)
0.062434554 = queryNorm
0.3118295 = fieldWeight in 1095, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.063882 = idf(docFreq=37, maxDocs=44421)
0.02734375 = fieldNorm(doc=1095)
0.25 = coord(1/4)
- Content
- "3rd ISKO Italy-UniMIB Meeting: Report More than 40 people attended the 3rd ISKO Italy meeting, again organized thanks to cooperation with the University of Milano Bicocca Library, despite a railway strike which impeded some planned speakers (Luca Rosati, Federica Paradisi, Cristiana Bettella) from reaching the venue. In the annual report on ISKO Italy activities and contacts, Claudio Gnoli announced that the 2010 international ISKO conference will be hosted in Rome, an organizing committee chaired by Fulvio Mazzocchi having just been constituted. The morning had an international flavour, as it was reconnected to the trends observed by Mela Bosch at the ISKO Spain conference recently held in Leon, showing an increase in the hermeneutic approach over the heuristic one, and especially to the Leon manifesto (http://www.iskoi.org/ ilc/leon.htm). This was promoted by Rick Szostak in his guest keynote address, concerning his proposal of non-disciplinary classification based on phenomena, theories, and methods. Melissa Tiberi and Barbara De Santis developed on their current research concerning semantics problems in equivalence relationships, and Cristiana Bettella (whose introduction was read by Caterina Barazia) on her one about humanistic knowledge, focusing on the double role played in it by the researcher. The afternoon was devoted to KO applications, starting with the experience of two university libraries (Milan Bicocca and Turin), with contribution of a third one in the discussion (Milan 1), in the use of KOSs to organize digital resources and links in the university web-space. Two emerging, promising domains of KO application were introduced by Paolo Franzese: semantic indexing of institutional archives, and by the DesignNet team: information visualization, exemplified in an impressive solution for thesauri. Finally, Andrea Marchitelli discussed hybridizations of social tagging and blogging with opacs, and Jiri Pika showed UDC-based search techniques in a Swiss multilingual OPAC. Presentations, abstracts, and photos will be progressively available from the event webpage (http://www.iskoi.org/doc/milano07.htm). - Claudio Gnoli.
-
Noerr, P.: The Digital Library Tool Kit (2001)
0.03
0.034261085 = product of:
0.13704434 = sum of:
0.13704434 = weight(_text_:java in 774) [ClassicSimilarity], result of:
0.13704434 = score(doc=774,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.31145877 = fieldWeight in 774, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=774)
0.25 = coord(1/4)
- Footnote
- This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
-
Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003)
0.03
0.034261085 = product of:
0.13704434 = sum of:
0.13704434 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
0.13704434 = score(doc=3758,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.31145877 = fieldWeight in 3758, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=3758)
0.25 = coord(1/4)
- Abstract
- The representation of information contents by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) multidimensional scaling (MDS), 2) cluster analysis, 3) neural networks (Self-Organizing Map, SOM). Finally, we draw conclusions about the viability of each kind of map. 1. Introduction. Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80's, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90's, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed in terms of the words in the title and abstract fields, and represented it in a two-dimensional map (Lin 1997). Later on, Lin applied this type of map to create website GTOCs, through a Java application.
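As a rough, hedged illustration of the SOM technique named in the abstract (not the authors' implementation; the input vectors below are random placeholders rather than UDC-derived document representations), the core training loop looks roughly like this:

import java.util.Random;

// Illustrative self-organizing map (SOM) training sketch: a 2-D grid of weight
// vectors is fitted to input vectors; the best-matching node and its grid
// neighbours are pulled towards each input, with learning rate and
// neighbourhood radius decaying over time.
public class TinySom {
    public static void main(String[] args) {
        int grid = 6, dim = 8, iterations = 2000;
        Random rnd = new Random(42);

        double[][][] w = new double[grid][grid][dim];
        for (int x = 0; x < grid; x++)
            for (int y = 0; y < grid; y++)
                for (int d = 0; d < dim; d++) w[x][y][d] = rnd.nextDouble();

        // Toy input vectors standing in for document representations.
        double[][] docs = new double[30][dim];
        for (double[] doc : docs)
            for (int d = 0; d < dim; d++) doc[d] = rnd.nextDouble();

        for (int t = 0; t < iterations; t++) {
            double progress = (double) t / iterations;
            double rate = 0.5 * (1.0 - progress);                   // decaying learning rate
            double radius = (grid / 2.0) * (1.0 - progress) + 1.0;  // decaying neighbourhood
            double[] doc = docs[t % docs.length];

            // Find the best-matching unit (smallest squared Euclidean distance).
            int bx = 0, by = 0;
            double best = Double.MAX_VALUE;
            for (int x = 0; x < grid; x++)
                for (int y = 0; y < grid; y++) {
                    double dist = 0;
                    for (int d = 0; d < dim; d++) {
                        double diff = w[x][y][d] - doc[d];
                        dist += diff * diff;
                    }
                    if (dist < best) { best = dist; bx = x; by = y; }
                }

            // Update the winner and its grid neighbours.
            for (int x = 0; x < grid; x++)
                for (int y = 0; y < grid; y++) {
                    double gridDist = Math.hypot(x - bx, y - by);
                    if (gridDist > radius) continue;
                    double influence = Math.exp(-(gridDist * gridDist) / (2 * radius * radius));
                    for (int d = 0; d < dim; d++)
                        w[x][y][d] += rate * influence * (doc[d] - w[x][y][d]);
                }
        }
        System.out.println("Trained a " + grid + "x" + grid + " map.");
    }
}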
-
Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010)
0.03
0.034261085 = product of:
0.13704434 = sum of:
0.13704434 = weight(_text_:java in 935) [ClassicSimilarity], result of:
0.13704434 = score(doc=935,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.31145877 = fieldWeight in 935, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=935)
0.25 = coord(1/4)
- Abstract
- Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
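The paper expresses its extraction rules in GATE/JAPE. Purely as a hedged illustration of the underlying idea (thesaurus terms driving rule-based annotation with concept labels such as those named in the abstract), a plain-Java sketch that does not use the GATE API and relies on an invented mini-gazetteer might look like this:

import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Rough illustration only: a tiny "gazetteer" of thesaurus terms drives
// rule-based annotation of time appellations and physical objects in text.
public class GazetteerAnnotator {
    public static void main(String[] args) {
        Map<String, List<String>> gazetteer = Map.of(
            "E49.Time Appellation", List.of("Bronze Age", "Roman", "medieval"),
            "P19.Physical Object", List.of("pit", "ditch", "pottery sherd"));

        String text = "A medieval ditch cut an earlier Roman pit containing a pottery sherd.";

        gazetteer.forEach((concept, terms) -> {
            for (String term : terms) {
                Matcher m = Pattern.compile("\\b" + Pattern.quote(term) + "\\b",
                        Pattern.CASE_INSENSITIVE).matcher(text);
                while (m.find())
                    System.out.printf("%s: \"%s\" at %d-%d%n",
                            concept, m.group(), m.start(), m.end());
            }
        });
    }
}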
-
Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007)
0.03
0.034261085 = product of:
0.13704434 = sum of:
0.13704434 = weight(_text_:java in 709) [ClassicSimilarity], result of:
0.13704434 = score(doc=709,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.31145877 = fieldWeight in 709, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=709)
0.25 = coord(1/4)
- Content
- "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
-
Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015)
0.03
0.034261085 = product of:
0.13704434 = sum of:
0.13704434 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
0.13704434 = score(doc=3301,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.31145877 = fieldWeight in 3301, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=3301)
0.25 = coord(1/4)
- Abstract
- Analytico-synthetic and faceted classifications, such as the Universal Decimal Classification (UDC), express the content of documents with complex, pre-combined classification codes. Without classification authority control to help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations are stored in an intermediate format (in this case, XML) by automatic means, without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers, which can then be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of the software as a service. This would allow the algorithm to be employed in both existing and future library systems to analyse UDC numbers without any significant programming effort.
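As a rough, hedged illustration of the kind of decomposition the paper describes (the real interpreter covers the full UDC grammar; the element names and the sample notation below are chosen for illustration only), a toy Java sketch that splits a colon-combined UDC number and emits an XML intermediate form might look like this:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration only: split a colon-combined UDC number into its constituent
// notations, pull out parenthesised common auxiliaries, and wrap the result in
// an invented XML intermediate format.
public class UdcToXml {
    public static void main(String[] args) {
        String notation = "622.341.1:669.1(485)"; // example complex UDC number

        StringBuilder xml = new StringBuilder("<udc notation=\"" + notation + "\">\n");
        for (String part : notation.split(":")) {
            String mainNumber = part.replaceAll("\\(.*?\\)", ""); // strip auxiliaries
            xml.append("  <component main=\"").append(mainNumber).append("\">\n");

            Matcher aux = Pattern.compile("\\((.*?)\\)").matcher(part);
            while (aux.find()) // e.g. the place auxiliary (485)
                xml.append("    <auxiliary value=\"").append(aux.group(1)).append("\"/>\n");

            xml.append("  </component>\n");
        }
        xml.append("</udc>");
        System.out.println(xml);
    }
}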
-
Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998)
0.03
0.02997845 = product of:
0.1199138 = sum of:
0.1199138 = weight(_text_:java in 1493) [ClassicSimilarity], result of:
0.1199138 = score(doc=1493,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.2725264 = fieldWeight in 1493, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.02734375 = fieldNorm(doc=1493)
0.25 = coord(1/4)
- Abstract
- Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to: * The process behind architecting a large, complex site * Web site hierarchy design and organization. Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
-
Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003)
0.03
0.02803486 = product of:
0.11213944 = sum of:
0.11213944 = weight(_text_:hosted in 2166) [ClassicSimilarity], result of:
0.11213944 = score(doc=2166,freq=2.0), product of:
0.5034649 = queryWeight, product of:
8.063882 = idf(docFreq=37, maxDocs=44421)
0.062434554 = queryNorm
0.22273538 = fieldWeight in 2166, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.063882 = idf(docFreq=37, maxDocs=44421)
0.01953125 = fieldNorm(doc=2166)
0.25 = coord(1/4)
- Abstract
- NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC and the German National Library, the Virtual International Authority File (VIAF), will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File" a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist, and are excluded from access to these information resources.
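The maintenance phase described above relies on OAI-PMH harvesting. A minimal hedged sketch of issuing a ListRecords request (the base URL is a placeholder, not a real service address) could look like this:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal OAI-PMH ListRecords request, of the kind used to harvest new,
// updated, and deleted authority records incrementally.
public class OaiHarvest {
    public static void main(String[] args) throws Exception {
        String baseUrl = "https://example.org/oai"; // placeholder endpoint
        String url = baseUrl + "?verb=ListRecords&metadataPrefix=oai_dc"
                   + "&from=2003-01-01";            // incremental harvest since a date

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The response is OAI-PMH XML; a real harvester would parse the records
        // and follow resumptionToken elements to page through the full set.
        System.out.println(response.body());
    }
}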
-
Markoff, J.: Researchers announce advance in image-recognition software (2014)
0.03
0.02803486 = product of:
0.11213944 = sum of:
0.11213944 = weight(_text_:hosted in 2875) [ClassicSimilarity], result of:
0.11213944 = score(doc=2875,freq=2.0), product of:
0.5034649 = queryWeight, product of:
8.063882 = idf(docFreq=37, maxDocs=44421)
0.062434554 = queryNorm
0.22273538 = fieldWeight in 2875, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.063882 = idf(docFreq=37, maxDocs=44421)
0.01953125 = fieldNorm(doc=2875)
0.25 = coord(1/4)
- Content
- "Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. "I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it." Dr. Li and Mr. Karpathy published their research as a Stanford University technical report. The Google team published their paper on arXiv.org, an open source site hosted by Cornell University.
-
Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021)
0.03
0.02803486 = product of:
0.11213944 = sum of:
0.11213944 = weight(_text_:hosted in 1424) [ClassicSimilarity], result of:
0.11213944 = score(doc=1424,freq=2.0), product of:
0.5034649 = queryWeight, product of:
8.063882 = idf(docFreq=37, maxDocs=44421)
0.062434554 = queryNorm
0.22273538 = fieldWeight in 1424, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.063882 = idf(docFreq=37, maxDocs=44421)
0.01953125 = fieldNorm(doc=1424)
0.25 = coord(1/4)
- Abstract
- But despite this progress, almost 2 years later, the use of Lenna continues. The image has appeared on the internet in 30+ different languages over the last decade, including in 10+ languages in 2021. The image's spread across digital geographies has mirrored this growth, moving from mostly .org domains before 1990 to over 100 different domains today, notably .com and .edu, along with others. Within the .edu world, the Lenna image continues to appear in homework questions and class slides and to be hosted on educational and research sites, ensuring that it is passed down to new generations of engineers. Whether it's due to institutional negligence or defiance, it seems that, for now, the image is here to stay.
-
Tennant, R.: Library catalogs : the wrong solution (2003)
0.03
0.025695814 = product of:
0.102783255 = sum of:
0.102783255 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
0.102783255 = score(doc=2558,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.23359407 = fieldWeight in 2558, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0234375 = fieldNorm(doc=2558)
0.25 = coord(1/4)
- Content
- - User Interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons - a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources - books, articles, web pages, working papers in institutional repositories - and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
-
OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009)
0.03
0.025695814 = product of:
0.102783255 = sum of:
0.102783255 = weight(_text_:java in 378) [ClassicSimilarity], result of:
0.102783255 = score(doc=378,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.23359407 = fieldWeight in 378, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0234375 = fieldNorm(doc=378)
0.25 = coord(1/4)
- Content
- Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
-
Chafe, W.L.: Meaning and the structure of language (1980)
0.02
0.023752626 = product of:
0.095010504 = sum of:
0.095010504 = weight(_text_:und in 1220) [ClassicSimilarity], result of:
0.095010504 = score(doc=1220,freq=32.0), product of:
0.13847354 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.062434554 = queryNorm
0.6861275 = fieldWeight in 1220, product of:
5.656854 = tf(freq=32.0), with freq of:
32.0 = termFreq=32.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=1220)
0.25 = coord(1/4)
- Classification
- ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
ET 430 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Synchrone Semantik / Allgemeines (Gesamtdarstellungen)
- RVK
- ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
ET 430 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Synchrone Semantik / Allgemeines (Gesamtdarstellungen)
-
Boßmeyer, C.: UNIMARC und MAB : Strukturunterschiede und Kompatibilitätsfragen (1995)
0.02
0.023509003 = product of:
0.09403601 = sum of:
0.09403601 = weight(_text_:und in 2436) [ClassicSimilarity], result of:
0.09403601 = score(doc=2436,freq=6.0), product of:
0.13847354 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.062434554 = queryNorm
0.67909014 = fieldWeight in 2436, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.125 = fieldNorm(doc=2436)
0.25 = coord(1/4)
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 42(1995) H.5, S.465-480
-
Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002)
0.02
0.02141318 = product of:
0.08565272 = sum of:
0.08565272 = weight(_text_:java in 2211) [ClassicSimilarity], result of:
0.08565272 = score(doc=2211,freq=2.0), product of:
0.44000798 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.062434554 = queryNorm
0.19466174 = fieldWeight in 2211, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.01953125 = fieldNorm(doc=2211)
0.25 = coord(1/4)
- Content
- The Java applet is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. A prototype of this interface has been developed and is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. The D-Lib search interface is available at <http://www.dlib.org/Architext/AT-dlib2query.html>.
-
SimTown : baue deine eigene Stadt (1995)
0.02
0.02077922 = product of:
0.08311688 = sum of:
0.08311688 = weight(_text_:und in 5546) [ClassicSimilarity], result of:
0.08311688 = score(doc=5546,freq=12.0), product of:
0.13847354 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.062434554 = queryNorm
0.60023654 = fieldWeight in 5546, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.078125 = fieldNorm(doc=5546)
0.25 = coord(1/4)
- Abstract
- SimTown was developed to introduce children to the most important concepts of economics (supply and demand), ecology (raw materials, pollution and recycling) and urban planning (the balance between housing, jobs and recreational facilities) in a simple and entertaining way
- Issue
- PC CD-ROM Windows. Ages 8 and up.
-
Atzbach, R.: Der Rechtschreibtrainer : Rechtschreibübungen und -spiele für die 5. bis 9. Klasse (1996)
0.02
0.020570379 = product of:
0.082281515 = sum of:
0.082281515 = weight(_text_:und in 5647) [ClassicSimilarity], result of:
0.082281515 = score(doc=5647,freq=6.0), product of:
0.13847354 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.062434554 = queryNorm
0.5942039 = fieldWeight in 5647, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.109375 = fieldNorm(doc=5647)
0.25 = coord(1/4)
- Abstract
- Old and new spelling rules
- Issue
- MS-DOS and Windows.
-
Geiß, D.: Gewerbliche Schutzrechte : Rationelle Nutzung ihrer Informations- und Rechtsfunktion in Wirtschaft und Wissenschaft Bericht über das 29.Kolloquium der Technischen Universität Ilmenau über Patentinformation und gewerblichen Rechtsschutz (2007)
0.02
0.020359393 = product of:
0.08143757 = sum of:
0.08143757 = weight(_text_:und in 1629) [ClassicSimilarity], result of:
0.08143757 = score(doc=1629,freq=8.0), product of:
0.13847354 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.062434554 = queryNorm
0.58810925 = fieldWeight in 1629, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.09375 = fieldNorm(doc=1629)
0.25 = coord(1/4)
- Source
- Information - Wissenschaft und Praxis. 58(2007) H.6/7, S.376-379