Search (4809 results, page 4 of 241)

  • × year_i:[2000 TO 2010}
  1. Conrad, J.G.; Schriber, C.P.: Managing déjà vu : collection building for the identification of nonidentical duplicate documents (2006) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 59) [ClassicSimilarity], result of:
          0.1716406 = score(doc=59,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 59, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=59)
      0.25 = coord(1/4)
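    The explain output above (repeated analogously for the entries below) follows Lucene's ClassicSimilarity TF-IDF formula; as a worked check of this entry's score using only the factors displayed, where tf(freq) = sqrt(freq):
    \[
    \text{score} = \text{coord} \times \underbrace{(\text{idf}\cdot\text{queryNorm})}_{\text{queryWeight}} \times \underbrace{(\sqrt{\text{freq}}\cdot\text{idf}\cdot\text{fieldNorm})}_{\text{fieldWeight}}
    = 0.25 \times (6.272122 \times 0.0658165) \times (\sqrt{2.0} \times 6.272122 \times 0.046875) \approx 0.25 \times 0.4128091 \times 0.4157869 \approx 0.0429
    \]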
    
    Abstract
    As online document collections continue to expand, both on the Web and in proprietary environments, the need for duplicate detection becomes more critical. Few users wish to retrieve search results consisting of sets of duplicate documents, whether identical duplicates or close variants. The goal of this work is to facilitate (a) investigations into the phenomenon of near duplicates and (b) algorithmic approaches to minimizing its deleterious effect on search results. Harnessing the expertise of both client-users and professional searchers, we establish principled methods to generate a test collection for identifying and handling nonidentical duplicate documents. We subsequently examine a flexible method of characterizing and comparing documents to permit the identification of near duplicates. This method has produced promising results following an extensive evaluation using a production-based test collection created by domain experts.
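    The abstract does not spell out the comparison method here; purely as an illustration of the general near-duplicate idea (not the authors' algorithm), a common baseline is word shingling with Jaccard similarity:

    ```python
    # Hedged sketch: near-duplicate comparison via word shingles and Jaccard
    # similarity -- a common baseline, not the authors' actual method.

    def shingles(text: str, w: int = 4) -> set:
        """Return the set of w-word shingles of a document."""
        words = text.lower().split()
        return {tuple(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity of two shingle sets (1.0 = identical)."""
        return len(a & b) / len(a | b) if (a or b) else 1.0

    doc1 = "the quick brown fox jumps over the lazy dog"
    doc2 = "the quick brown fox jumped over the lazy dog"
    # Higher values indicate closer duplicates; identical texts score 1.0.
    print(round(jaccard(shingles(doc1), shingles(doc2)), 2))
    ```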
  2. Decurtins, C.; Norrie, M.C.; Signer, B.: Putting the gloss on paper : a framework for cross-media annotation (2003) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 933) [ClassicSimilarity], result of:
          0.1716406 = score(doc=933,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=933)
      0.25 = coord(1/4)
    
    Abstract
    We present a general framework for cross-media annotation that can be used to support the many different forms and uses of annotation. Specifically, we discuss the need for digital annotation of printed materials and describe how various technologies for digitally augmented paper can be used in support of work practices. The state of the art in terms of both commercial and research solutions is described in some detail, with an analysis of the extent to which they can support both the writing and reading activities associated with annotation. Our framework is based on an extension of the information server that was developed within the Paper++ project to support enhanced reading. It is capable of handling both formal and informal annotation across printed and digital media, exploiting a range of technologies for information capture and display. A prototype demonstrator application for mammography is presented to illustrate both the functionality of the framework and the status of existing technologies.
  3. Zhan, J.; Loh, H.T.: Using latent semantic indexing to improve the accuracy of document clustering (2007) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 1264) [ClassicSimilarity], result of:
          0.1716406 = score(doc=1264,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 1264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=1264)
      0.25 = coord(1/4)
    
    Abstract
    Document clustering is a significant research issue in information retrieval and text mining. Traditionally, most clustering methods were based on the vector space model, which has limitations such as high dimensionality and weakness in handling synonymy and polysemy. Latent semantic indexing (LSI) is able to deal with such problems to some extent. Previous studies have shown that using LSI can reduce the time needed to cluster a large document set while having little effect on clustering accuracy. However, when clustering a small document set, accuracy is of greater concern than efficiency. In this paper, we demonstrate that LSI can improve the clustering accuracy of a small document set, and we also recommend the dimensions needed to achieve the best clustering performance.
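    As a rough sketch of the LSI-plus-clustering pipeline described above (an illustration of the general technique with scikit-learn, not the authors' experimental setup; the toy corpus and parameter choices are placeholders):

    ```python
    # Sketch: TF-IDF -> truncated SVD (LSI) -> k-means, with scikit-learn.
    # Illustrates the general technique only; corpus and parameters are toys.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    docs = [
        "latent semantic indexing reduces dimensionality",
        "lsi copes with synonymy and polysemy to some extent",
        "k-means partitions documents into clusters",
        "document clustering groups similar texts together",
    ]

    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Project the high-dimensional term space into a small latent space.
    lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsi)
    print(labels)  # cluster assignment per document
    ```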
  4. Nicholson, S.; Smith, C.A.: Using lessons from health care to protect the privacy of library users : guidelines for the de-identification of library data based on HIPAA (2007) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 1451) [ClassicSimilarity], result of:
          0.1716406 = score(doc=1451,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 1451, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=1451)
      0.25 = coord(1/4)
    
    Abstract
    Although libraries have employed policies to protect the data about use of their services, these policies are rarely specific or standardized. Since 1996, the U.S. health care system has been grappling with the Health Insurance Portability and Accountability Act (HIPAA; Health Insurance Portability and Accountability Act, 1996), which is designed to provide those handling personal health information with standardized, definitive instructions as to the protection of data. In this work, the authors briefly discuss the present situation of privacy policies about library use data, outline the HIPAA guidelines to understand parallels between the two, and finally propose methods to create a de-identified library data warehouse based on HIPAA for the protection of user privacy.
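    A minimal, hypothetical sketch of what HIPAA-style de-identification of a circulation record might look like; the field names and rules below are invented for illustration and are not taken from the article:

    ```python
    # Hypothetical sketch of de-identifying a library circulation record,
    # loosely in the spirit of HIPAA "Safe Harbor"-style generalisation.
    # Field names and rules are invented for illustration.
    import hashlib

    def deidentify(record: dict, salt: str) -> dict:
        return {
            # Salted one-way pseudonym: usage stays linkable, identity does not.
            "patron_key": hashlib.sha256((salt + record["patron_id"]).encode()).hexdigest()[:16],
            "item_subject": record["item_subject"],        # keep non-identifying fields
            "checkout_year": record["checkout_date"][:4],  # generalise full date to year
            "zip3": record["zip"][:3],                     # truncate postal code
        }

    rec = {"patron_id": "P-10293", "item_subject": "privacy",
           "checkout_date": "2007-05-14", "zip": "53706"}
    print(deidentify(rec, salt="local-secret"))
    ```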
  5. Singh, S.; Dey, L.: ¬A rough-fuzzy document grading system for customized text information retrieval (2005) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 2007) [ClassicSimilarity], result of:
          0.1716406 = score(doc=2007,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 2007, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=2007)
      0.25 = coord(1/4)
    
    Abstract
    Due to the large repository of documents available on the web, users are usually inundated by a large volume of information, most of which is found to be irrelevant. Since user perspectives vary, a client-side text filtering system that learns the user's perspective can reduce the problem of irrelevant retrieval. In this paper, we have provided the design of a customized text information filtering system which learns user preferences and modifies the initial query to fetch better documents. It uses a rough-fuzzy reasoning scheme. The rough-set based reasoning takes care of natural language nuances, like synonym handling, very elegantly. The fuzzy decider provides qualitative grading to the documents for the user's perusal. We have provided the detailed design of the various modules and some results related to the performance analysis of the system.
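    As a toy illustration of qualitative grading only (simple fuzzy membership functions over a match score; this is not the paper's rough-fuzzy reasoning scheme):

    ```python
    # Toy sketch: qualitative grading of documents with triangular fuzzy
    # membership functions over a 0-1 match score. Illustrative only; this is
    # not the rough-fuzzy reasoning scheme described in the abstract.

    def memberships(score: float) -> dict:
        return {
            "low": max(0.0, 1.0 - score / 0.5),                # peaks at 0.0
            "medium": max(0.0, 1.0 - abs(score - 0.5) / 0.5),  # peaks at 0.5
            "high": max(0.0, (score - 0.5) / 0.5),             # peaks at 1.0
        }

    def grade(score: float) -> str:
        m = memberships(score)
        return max(m, key=m.get)  # defuzzify by maximum membership

    for s in (0.2, 0.55, 0.9):
        print(s, grade(s))
    ```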
  6. Seetharama, S.: Knowledge organization system over time (2006) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 2466) [ClassicSimilarity], result of:
          0.1716406 = score(doc=2466,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 2466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=2466)
      0.25 = coord(1/4)
    
    Abstract
    Presents an overview of the concepts, techniques and tools of knowledge organization. Knowledge Organization Systems (KOS), such as authority files, glossaries, dictionaries, subject headings lists, classification schemes, taxonomies, categorization schemes, thesauri, semantic networks and ontologies, are involved in organizing, retrieving and disseminating information. Specifically, KOS are useful in users' needs assessment, database creation, online systems and OPACs, and for the generation of information services and products. Practical experience suggests that, as new information systems provide access to a wider range, quantity and variety of forms of information and enable the provision of a variety of information services and products to meet people's needs, KOS can perform their functions in an electronic/digital environment as efficiently and effectively as in a traditional library environment. Hence, it is necessary that information professionals and computer scientists work in an integrated manner to enhance information handling operations in electronic/digital libraries.
  7. Twidale, M.B.; Gruzd, A.A.; Nichols, D.M.: Writing in the library : exploring tighter integration of digital library use with the writing process (2008) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 3045) [ClassicSimilarity], result of:
          0.1716406 = score(doc=3045,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 3045, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=3045)
      0.25 = coord(1/4)
    
    Abstract
    Information provision via digital libraries often separates the writing process from that of information searching. In this paper we investigate the potential of a tighter integration between searching for information in digital libraries and using those results in academic writing. We consider whether it may sometimes be advantageous to encourage searching while writing instead of the more conventional approach of searching first and then writing. The provision of ambient search is explored, taking the user's ongoing writing as a source for the generation of search terms used to provide possibly useful results. A rapid prototyping approach exploiting web services was used as a way to explore the design space and to have working demonstrations that can provoke reactions, design suggestions and discussions about desirable functionalities and interfaces. This design process and some preliminary user studies are described. The results of these studies lead to a consideration of issues arising in exploring this design space, including handling irrelevant results and the particular challenges of evaluation.
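    A very small sketch of the "ambient search" idea, i.e. deriving candidate query terms from the user's in-progress draft; the keyword-frequency heuristic, stopword list and draft text below are placeholders, not the web-service prototype described in the paper:

    ```python
    # Placeholder sketch of deriving search terms from an in-progress draft
    # by simple frequency counting; not the paper's prototype or web services.
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "that",
                 "while", "could", "with"}

    def query_terms(draft: str, k: int = 5) -> list:
        words = re.findall(r"[a-z]+", draft.lower())
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
        return [w for w, _ in counts.most_common(k)]

    draft = ("Digital library interfaces rarely support the writing process; "
             "searching while writing could surface relevant digital library results.")
    print(query_terms(draft))  # candidate terms to feed a background search
    ```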
  8. Morrison, P.J.: Tagging and searching : search retrieval effectiveness of folksonomies on the World Wide Web (2008) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 3109) [ClassicSimilarity], result of:
          0.1716406 = score(doc=3109,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 3109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=3109)
      0.25 = coord(1/4)
    
    Abstract
    Many Web sites have begun allowing users to submit items to a collection and tag them with keywords. The folksonomies built from these tags are an interesting topic that has seen little empirical research. This study compared the search information retrieval (IR) performance of folksonomies from social bookmarking Web sites against search engines and subject directories. Thirty-four participants created 103 queries for various information needs. Results from each IR system were collected and participants judged relevance. Folksonomy search results overlapped with those from the other systems, and documents found by both search engines and folksonomies were significantly more likely to be judged relevant than those returned by any single IR system type. The search engines in the study had the highest precision and recall, but the folksonomies fared surprisingly well. Del.icio.us was statistically indistinguishable from the directories in many cases. Overall the directories were more precise than the folksonomies but they had similar recall scores. Better query handling may enhance folksonomy IR performance further. The folksonomies studied were promising, and may be able to improve Web search performance.
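    The precision/recall comparison described above can be computed from relevance judgments along these lines (the retrieved sets and judgments below are made-up placeholders, not the study's data):

    ```python
    # Sketch: per-system precision and recall from relevance judgments.
    # The retrieved sets and judgments below are made-up placeholders.

    def precision_recall(retrieved: set, relevant: set):
        hits = len(retrieved & relevant)
        p = hits / len(retrieved) if retrieved else 0.0
        r = hits / len(relevant) if relevant else 0.0
        return p, r

    relevant = {"d1", "d3", "d4", "d7"}
    systems = {
        "search_engine": {"d1", "d2", "d3", "d4"},
        "folksonomy": {"d1", "d3", "d5"},
        "directory": {"d3", "d6"},
    }
    for name, retrieved in systems.items():
        p, r = precision_recall(retrieved, relevant)
        print(f"{name}: precision={p:.2f} recall={r:.2f}")
    ```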
  9. Kruk, S.R.; McDaniel, B.: Goals of semantic digital libraries (2009) 0.04
    0.04291015 = product of:
      0.1716406 = sum of:
        0.1716406 = weight(_text_:handling in 365) [ClassicSimilarity], result of:
          0.1716406 = score(doc=365,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.41578686 = fieldWeight in 365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=365)
      0.25 = coord(1/4)
    
    Abstract
    Digital libraries have become a commodity in today's Internet world. More and more information is produced, and more and more non-digital information is being made available. The new, more user-friendly, community-oriented technologies used throughout the Internet are raising the bar of expectations. Digital libraries cannot stand still with their technologies; if not for the sake of handling the rapidly growing amount and diversity of information, they must provide a better user experience, matching and exceeding the ever-growing standards set by the industry. The next generation of digital libraries combines technological solutions, such as P2P, SOA, or Grid, with recent research on semantics and social networks. These solutions are put into practice to answer a variety of requirements imposed on digital libraries.
  10. Noerr, P.: ¬The Digital Library Tool Kit (2001) 0.04
    0.036116935 = product of:
      0.14446774 = sum of:
        0.14446774 = weight(_text_:java in 774) [ClassicSimilarity], result of:
          0.14446774 = score(doc=774,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.31145877 = fieldWeight in 774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=774)
      0.25 = coord(1/4)
    
    Footnote
    This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
  11. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.04
    0.036116935 = product of:
      0.14446774 = sum of:
        0.14446774 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
          0.14446774 = score(doc=3758,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.31145877 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3758)
      0.25 = coord(1/4)
    
    Abstract
    The representation of information contents by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) Multidimensional scaling (MDS), 2) Cluster analysis, 3) Neural networks (Self Organizing Map - SOM). Finally, we draw conclusions about the viability of applying each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80's, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90's, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed as a function of the words of the title and abstract fields, and represented it in a two-dimensional map (Lin 1997). Later on, Lin applied this type of map to create website GTOCs through a Java application.
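    Two of the map-building steps mentioned above (metric MDS over a cosine-distance matrix, followed by clustering) can be sketched with scikit-learn as follows; the tiny document-term matrix is a placeholder and the SOM step is omitted:

    ```python
    # Sketch of two of the map-building steps: metric MDS on a cosine-distance
    # matrix, then k-means on the 2-D coordinates (scikit-learn). The toy
    # document-term matrix is a placeholder; the SOM step is omitted.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_distances
    from sklearn.manifold import MDS
    from sklearn.cluster import KMeans

    X = np.array([[2, 0, 1, 0],   # rows = documents, columns = index terms
                  [1, 0, 2, 0],
                  [0, 3, 0, 1],
                  [0, 2, 0, 2]], dtype=float)

    D = cosine_distances(X)  # pairwise dissimilarities between documents
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)  # 2-D map positions
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
    print(coords.round(2))
    print(labels)
    ```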
  12. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.036116935 = product of:
      0.14446774 = sum of:
        0.14446774 = weight(_text_:java in 709) [ClassicSimilarity], result of:
          0.14446774 = score(doc=709,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.31145877 = fieldWeight in 709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=709)
      0.25 = coord(1/4)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  13. Ford, N.: Cognitive styles and virtual environments (2000) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 5604) [ClassicSimilarity], result of:
          0.14303385 = score(doc=5604,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 5604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5604)
      0.25 = coord(1/4)
    
    Abstract
    Virtual environments enable a given information space to be traversed in different ways by different individuals, using different routes and navigation tools. However, we urgently need robust user models to enable us to optimize the deployment of such facilities. Research into individual differences suggests that the notion of cognitive style may be useful in this process. Many such styles have been identified. However, it is argued that Pask's work on holist and serialist strategies and associated styles of information processing is particularly promising in terms of the development of adaptive information systems. These constructs are reviewed, and their potential utility in 'real-world' situations assessed. Suggestions are made for ways in which they could be used in the development of virtual environments capable of optimizing the stylistic strengths and complementing the weaknesses of individual users. The role of neural networks in handling the essentially fuzzy nature of user models is discussed. Neural networks may be useful in dynamically mapping users' navigational behavior onto user models to enable them to generate appropriate adaptive responses. However, their learning capacity may also be particularly useful in the process of improving system performance and in the cumulative development of more robust user models.
  14. White, H.D.: Author cocitation analysis and pearson's r (2003) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 3119) [ClassicSimilarity], result of:
          0.14303385 = score(doc=3119,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 3119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3119)
      0.25 = coord(1/4)
    
    Abstract
    In their article "Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient," Ahlgren, Jarneving, and Rousseau fault traditional author cocitation analysis (ACA) for using Pearson's r as a measure of similarity between authors because it fails two tests of stability of measurement. The instabilities arise when rs are recalculated after a first coherent group of authors has been augmented by a second coherent group with whom the first has little or no cocitation. However, AJ&R neither cluster nor map their data to demonstrate how fluctuations in rs will mislead the analyst, and the problem they pose is remote from both theory and practice in traditional ACA. By entering their own rs into multidimensional scaling and clustering routines, I show that, despite r's fluctuations, clusters based on it are much the same for the combined groups as for the separate groups. The combined groups, when mapped, appear as polarized clumps of points in two-dimensional space, confirming that differences between the groups have become much more important than differences within the groups, an accurate portrayal of what has happened to the data. Moreover, r produces clusters and maps very like those based on other coefficients that AJ&R mention as possible replacements, such as a cosine similarity measure or a chi-square dissimilarity measure. Thus, r performs well enough for the purposes of ACA. Accordingly, I argue that qualitative information revealing why authors are cocited is more important than the cautions proposed in the AJ&R critique. I include notes on topics such as handling the diagonal in author cocitation matrices, lognormalizing data, and testing r for significance.
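    The two coefficients discussed above can be compared directly on a small, invented author cocitation matrix; note that how the diagonal is filled (zeros here) is itself a modelling choice, as the abstract's closing note points out:

    ```python
    # Pearson's r versus cosine similarity over the rows of a small, invented
    # author cocitation matrix (the diagonal treatment is a modelling choice).
    import numpy as np

    C = np.array([[0., 12., 9., 1.],   # rows/columns = authors, cells = cocitations
                  [12., 0., 7., 0.],
                  [9., 7., 0., 2.],
                  [1., 0., 2., 0.]])

    pearson = np.corrcoef(C)                # Pearson's r between author profiles
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    cosine = (C @ C.T) / (norms @ norms.T)  # cosine similarity between rows
    print(pearson.round(2))
    print(cosine.round(2))
    ```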
  15. Dawson, H.: Using the Internet for political research : practical tips and hints (2003) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 5511) [ClassicSimilarity], result of:
          0.14303385 = score(doc=5511,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 5511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5511)
      0.25 = coord(1/4)
    
    Content
    Key Features: Includes chapters on key topics such as elections, parliaments, prime ministers and presidents; contains case studies of typical searches; highlights useful political science Internet sites.
    The Author: Heather Dawson is an Assistant Librarian at the British Library of Political and Economic Science and Politics and Government Editor of SOSIG (The Social Science Information Gateway).
    Readership: This book is aimed at researchers, librarians/information workers handling reference enquiries, and students.
    Contents: Getting started on using the Internet - search tools available, information gateways, search terms, getting further information; Political science research - getting started, key organisations, key web sites; Elections - using the Internet to follow an election, information on electoral systems, tracing election results, future developments (e.g. a digital archive); Political parties - what is online, constructing searches, key sites, where to find information; Heads of state (presidents and prime ministers) - tracing news stories, speeches, directories worldwide; Parliaments - what is happening in Parliament, tracing MPs, Bills, devolution and regional parliaments in the UK, links to useful sites with directories of parliaments worldwide; Government departments - tracing legislation, statistics and consultation papers; Political science education - information on courses, grants, libraries, searching library catalogues, tracing academic staff members; Keeping up-to-date - political news stories, political research and forthcoming events.
  16. Byfield, P.: Managing information in a complex organisation (2005) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 5512) [ClassicSimilarity], result of:
          0.14303385 = score(doc=5512,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 5512, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5512)
      0.25 = coord(1/4)
    
    Abstract
    Considers the problems large organisations have in handling the vast amounts of information in their systems, such as: the culture of communication (committees/meetings/networks); 'bureaucracy'; technology - IT 'versus' operational departments; structures (hierarchy and reporting lines); information ownership; resources. The book considers how these problems can be overcome, by both individual information professionals and departments or units.
  17. Valente, A.; Luzi, D.: Different contexts in electronic communication : some remarks on the communicability of scientific knowledge (2000) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 5544) [ClassicSimilarity], result of:
          0.14303385 = score(doc=5544,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 5544, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5544)
      0.25 = coord(1/4)
    
    Abstract
    This paper explores how and to what extent the appearance and wide use of Information and Communication Technologies (ICTs) may enhance scientific communication and knowledge. The first part analyses the general boundaries of scientific communication, focusing on the use of email. It summarises and develops the results of relevant international studies and surveys on computer-mediated communication; it identifies, on the one hand, the principal social settings and contexts in which email is used and, on the other, the characteristic features which determine specific communication models. The analysis provides evidence of the various factors which determine the dynamics of electronic communication and which, more specifically, define the difference between business and scientific communication. The second part of the paper explores the close relationship between communication and knowledge in the scientific sector and the role played by ICTs. The assumption that ICTs ought to enhance the acquisition, sharing and transmission of scientific knowledge is questioned by the distinction between explicit and tacit knowledge: ICTs ultimately appear to provide a strong drive only to processes of explicit/coded knowledge handling. Nevertheless, exploring the main components of tacit knowledge in depth, and considering recent ICT-based applications, it is possible to foresee new opportunities for the creation and dissemination of knowledge through networks.
  18. LaBarre, K.; Cochrane, P.A.: Facet analysis as a knowledge management tool on the Internet (2006) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 2489) [ClassicSimilarity], result of:
          0.14303385 = score(doc=2489,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 2489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2489)
      0.25 = coord(1/4)
    
    Abstract
    In 2001, a group of information architects involved in designing websites, and knowledge management specialists involved in creating access to corporate knowledge bases appeared to have re-discovered facet analysis and faceted classification. These groups have been instrumental in creating new and different ways of handling digital content of the Internet. Some of these practitioners explicitly use the forms and language of facet analysis and faceted classification, while others seem to do so implicitly. Following a brief overview of the work and discussions on facets and faceted classification in recent years, we focus on our observations about new information resources which seem more in line with the Fourth law of Library Science ("Save the time of the reader") than most library OPACs today. These new developments on the Internet point to a partial grasp of a disciplined approach to subject access. This is where Ranganathan and Neelameghan's approach needs to be reviewed for the new audience of information system designers. A report on the work undertaken by us forms a principal part of this paper.
  19. Hjoerland, B.; Pedersen, K.N.: ¬A substantive theory of classification for information retrieval (2005) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 2892) [ClassicSimilarity], result of:
          0.14303385 = score(doc=2892,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 2892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2892)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - To suggest that a theory of classification for information retrieval (IR), asked for by Spärck Jones in a 1970 paper, presupposes a full implementation of a pragmatic understanding. Part of the Journal of Documentation celebration, "60 years of the best in information research". Design/methodology/approach - Literature-based conceptual analysis, taking Sparck Jones as its starting-point. Analysis involves distinctions between "positivism" and "pragmatism" and "classical" versus Kuhnian understandings of concepts. Findings - Classification, both manual and automatic, for retrieval benefits from drawing upon a combination of qualitative and quantitative techniques, a consideration of theories of meaning, and the adding of top-down approaches to IR in which divisions of labour, domains, traditions, genres, document architectures etc. are included as analytical elements and in which specific IR algorithms are based on the examination of specific literatures. Introduces an example illustrating the consequences of a full implementation of a pragmatist understanding when handling homonyms. Practical implications - Outlines how to classify from a pragmatic-philosophical point of view. Originality/value - Provides, emphasizing a pragmatic understanding, insights of importance to classification for retrieval, both manual and automatic. - Vgl. auch: Szostak, R.: Classification, interdisciplinarity, and the study of science. In: Journal of documentation. 64(2008) no.3, S.319-332.
  20. Skov, M.; Larsen, B.; Ingwersen, P.: Inter and intra-document contexts applied in polyrepresentation for best match IR (2008) 0.04
    0.03575846 = product of:
      0.14303385 = sum of:
        0.14303385 = weight(_text_:handling in 3117) [ClassicSimilarity], result of:
          0.14303385 = score(doc=3117,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.34648907 = fieldWeight in 3117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3117)
      0.25 = coord(1/4)
    
    Abstract
    The principle of polyrepresentation offers a theoretical framework for handling multiple contexts in information retrieval (IR). This paper presents an empirical laboratory study of polyrepresentation in restricted mode of the information space with focus on inter and intra-document features. The Cystic Fibrosis test collection indexed in the best match system InQuery constitutes the experimental setting. Overlaps between five functionally and/or cognitively different document representations are identified. Supporting the principle of polyrepresentation, results show that in general overlaps generated by three or four representations of different nature have higher precision than those generated from two representations or the single fields. This result pertains to both structured and unstructured query mode in best match retrieval, however, with the latter query mode demonstrating higher performance. The retrieval overlaps containing search keys from the bibliographic references provide the best retrieval performance and minor MeSH terms the worst. It is concluded that a highly structured query language is necessary when implementing the principle of polyrepresentation in a best match IR system because the principle is inherently Boolean. Finally a re-ranking test shows promising results when search results are re-ranked according to precision obtained in the overlaps whilst re-ranking by citations seems less useful when integrated into polyrepresentative applications.
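    A small sketch of the retrieval-overlap idea: intersect result sets obtained from functionally different representations and compute precision within each overlap (the result sets and relevance judgments below are invented, not the Cystic Fibrosis data):

    ```python
    # Sketch: overlaps between result sets from different document
    # representations, with precision computed inside each overlap.
    # Result sets and relevance judgments are invented placeholders.
    from itertools import combinations

    results = {
        "title": {"d1", "d2", "d3", "d5"},
        "abstract": {"d1", "d3", "d4"},
        "references": {"d1", "d3", "d6"},
    }
    relevant = {"d1", "d3"}

    for r in range(2, len(results) + 1):
        for combo in combinations(results, r):
            overlap = set.intersection(*(results[c] for c in combo))
            prec = len(overlap & relevant) / len(overlap) if overlap else 0.0
            print(combo, sorted(overlap), f"precision={prec:.2f}")
    ```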

Languages

  • d 4222
  • e 544
  • m 11
  • es 2
  • f 2
  • ru 2
  • s 2
  • el 1

Types

  • a 3579
  • m 813
  • el 227
  • x 212
  • s 188
  • i 48
  • r 30
  • n 8
  • b 7
  • l 5

Themes

Subjects

Classifications