Search (1244 results, page 5 of 63)

  • Filter: language_ss:"e"
  1. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.04
    0.04316724 = product of:
      0.17266896 = sum of:
        0.17266896 = weight(_text_:java in 272) [ClassicSimilarity], result of:
          0.17266896 = score(doc=272,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.38932347 = fieldWeight in 272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=272)
      0.25 = coord(1/4)
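    Every result in this list carries the same ClassicSimilarity (Lucene TF-IDF) explain tree, so it is worth unpacking once. Reading the nested values above from the bottom up, each score is the product of a query-side weight and a field-side weight, scaled by the coordination factor:

      score = coord × queryWeight × fieldWeight
            = coord × (idf × queryNorm) × (√tf × idf × fieldNorm)
            = 0.25 × (7.0475073 × 0.06293151) × (1.4142135 × 7.0475073 × 0.0390625)
            = 0.25 × 0.44351026 × 0.38932347
            = 0.04316724

    Only the matched term and the tf, idf, and fieldNorm values vary from record to record below; the structure of every tree is identical.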
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
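    Freeman's betweenness centrality, which the abstract uses to flag pivotal points, is easy to state in code. The Java sketch below computes it with Brandes' algorithm on an unweighted, undirected graph; it illustrates the metric only, not CiteSpace II's actual implementation, and the class and method names are assumptions.

    import java.util.*;

    public class Betweenness {

        /** adjacency.get(v) lists the neighbours of node v (nodes are 0..n-1). */
        public static double[] compute(List<List<Integer>> adjacency) {
            int n = adjacency.size();
            double[] centrality = new double[n];
            for (int s = 0; s < n; s++) {
                Deque<Integer> stack = new ArrayDeque<>();
                List<List<Integer>> pred = new ArrayList<>();
                for (int i = 0; i < n; i++) pred.add(new ArrayList<>());
                double[] sigma = new double[n];   // number of shortest paths from s
                int[] dist = new int[n];
                Arrays.fill(dist, -1);
                sigma[s] = 1.0;
                dist[s] = 0;
                Deque<Integer> queue = new ArrayDeque<>();
                queue.add(s);
                while (!queue.isEmpty()) {        // BFS, since the graph is unweighted
                    int v = queue.poll();
                    stack.push(v);
                    for (int w : adjacency.get(v)) {
                        if (dist[w] < 0) {        // w discovered for the first time
                            dist[w] = dist[v] + 1;
                            queue.add(w);
                        }
                        if (dist[w] == dist[v] + 1) { // a shortest path to w runs through v
                            sigma[w] += sigma[v];
                            pred.get(w).add(v);
                        }
                    }
                }
                double[] delta = new double[n];   // dependency of s on each node
                while (!stack.isEmpty()) {        // back-propagate, farthest nodes first
                    int w = stack.pop();
                    for (int v : pred.get(w))
                        delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w]);
                    if (w != s) centrality[w] += delta[w];
                }
            }
            for (int i = 0; i < n; i++) centrality[i] /= 2.0; // undirected: each pair counted twice
            return centrality;
        }
    }

    Nodes with high betweenness lie on many shortest paths between clusters, which is why the approach treats them as candidate pivotal points between specialties.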
  2. Eddings, J.: How the Internet works (1994) 0.04
    0.04316724 = product of:
      0.17266896 = sum of:
        0.17266896 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
          0.17266896 = score(doc=2514,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.38932347 = fieldWeight in 2514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2514)
      0.25 = coord(1/4)
    
    Abstract
    How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer for Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP) and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
  3. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.04
    0.04316724 = product of:
      0.17266896 = sum of:
        0.17266896 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
          0.17266896 = score(doc=4179,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.38932347 = fieldWeight in 4179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4179)
      0.25 = coord(1/4)
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
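    The extraction step mentioned above (pulling classes and properties out of the ontology with the Apache Jena API) can be sketched in a few lines. This is a minimal illustration assuming current org.apache.jena package names and a hypothetical ontology file name; the paper's own code is not reproduced in this record.

    import org.apache.jena.ontology.OntClass;
    import org.apache.jena.ontology.OntModel;
    import org.apache.jena.ontology.OntModelSpec;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.util.iterator.ExtendedIterator;

    public class OntologyReader {
        public static void main(String[] args) {
            // In-memory OWL model, no reasoner attached
            OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            model.read("classical-music-recording.owl");  // hypothetical file name
            ExtendedIterator<OntClass> classes = model.listClasses();
            while (classes.hasNext()) {
                OntClass c = classes.next();
                if (c.isAnon()) continue;                 // skip anonymous class expressions
                System.out.println("Class: " + c.getLocalName());
                c.listDeclaredProperties(true)            // true = directly declared only
                 .forEachRemaining(p -> System.out.println("  property: " + p.getLocalName()));
            }
        }
    }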
  4. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.04
    0.042406846 = product of:
      0.16962738 = sum of:
        0.16962738 = weight(_text_:hyperlink in 5746) [ClassicSimilarity], result of:
          0.16962738 = score(doc=5746,freq=2.0), product of:
            0.49147287 = queryWeight, product of:
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.06293151 = queryNorm
            0.3451409 = fieldWeight in 5746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.03125 = fieldNorm(doc=5746)
      0.25 = coord(1/4)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone and 3D hyperbolic geometry for a Focus+Context view, and it provides a fluid interactive experience through guaranteed frame-rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  5. Barjak, F.; Li, X.; Thelwall, M.: Which factors explain the Web impact of scientists' personal homepages? (2007) 0.04
    0.042406846 = product of:
      0.16962738 = sum of:
        0.16962738 = weight(_text_:hyperlink in 1073) [ClassicSimilarity], result of:
          0.16962738 = score(doc=1073,freq=2.0), product of:
            0.49147287 = queryWeight, product of:
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.06293151 = queryNorm
            0.3451409 = fieldWeight in 1073, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.03125 = fieldNorm(doc=1073)
      0.25 = coord(1/4)
    
    Abstract
    In recent years, a considerable body of Webometric research has used hyperlinks to generate indicators for the impact of Web documents and the organizations that created them. The relationship between this Web impact and other, offline impact indicators has been explored for entire universities, departments, countries, and scientific journals, but not yet for individual scientists, an important omission. The present research closes this gap by investigating factors that may influence the Web impact (i.e., inlink counts) of scientists' personal homepages. Data concerning 456 scientists from five scientific disciplines in six European countries were analyzed, showing that both homepage content and personal and institutional characteristics of the homepage owners had significant relationships with inlink counts. A multivariate statistical analysis confirmed that full-text articles are the most linked-to content in homepages. At the individual homepage level, hyperlinks are related to several offline characteristics. Notable differences regarding total inlinks to scientists' homepages exist between the scientific disciplines and the countries in the sample. There are also both gender and age effects: the homepages of female and of older scientists attract fewer external inlinks (i.e., links from other Web domains). There is only a weak relationship between a scientist's recognition and homepage inlinks and, surprisingly, no relationship between research productivity and inlink counts. Contrary to expectations, the size of collaboration networks is negatively related to hyperlink counts. Some of the relationships between hyperlinks to homepages and the properties of their owners can be explained by the content that the homepage owners put on their homepage and their level of Internet use; however, the findings about productivity and collaborations do not seem to have a simple, intuitive explanation. Overall, the results emphasize the complexity of the phenomenon of Web linking when analyzed at the level of individual pages.
  6. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.04
    0.042406846 = product of:
      0.16962738 = sum of:
        0.16962738 = weight(_text_:hyperlink in 1354) [ClassicSimilarity], result of:
          0.16962738 = score(doc=1354,freq=2.0), product of:
            0.49147287 = queryWeight, product of:
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.06293151 = queryNorm
            0.3451409 = fieldWeight in 1354, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.03125 = fieldNorm(doc=1354)
      0.25 = coord(1/4)
    
    Abstract
    Web mining aims to discover useful information and knowledge from the Web hyperlink structure, page contents, and usage data. Although Web mining uses many conventional data mining techniques, it is not purely an application of traditional data mining due to the semistructured and unstructured nature of the Web data and its heterogeneity. It has also developed many of its own algorithms and techniques. Liu has written a comprehensive text on Web data mining. Key topics of structure mining, content mining, and usage mining are covered both in breadth and in depth. His book brings together all the essential concepts and algorithms from related areas such as data mining, machine learning, and text processing to form an authoritative and coherent text. The book offers a rich blend of theory and practice, addressing seminal research ideas, as well as examining the technology from a practical point of view. It is suitable for students, researchers and practitioners interested in Web mining both as a learning text and a reference book. Lecturers can readily use it for classes on data mining, Web mining, and Web search. Additional teaching materials such as lecture slides, datasets, and implemented algorithms are available online.
  7. Noerr, P.: The Digital Library Tool Kit (2001) 0.03
    0.03453379 = product of:
      0.13813516 = sum of:
        0.13813516 = weight(_text_:java in 774) [ClassicSimilarity], result of:
          0.13813516 = score(doc=774,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.31145877 = fieldWeight in 774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=774)
      0.25 = coord(1/4)
    
    Footnote
    This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
  8. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.03
    0.03453379 = product of:
      0.13813516 = sum of:
        0.13813516 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
          0.13813516 = score(doc=3758,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.31145877 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3758)
      0.25 = coord(1/4)
    
    Abstract
    The representation of information content by graphical maps is an extended, ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) multidimensional scaling (MDS), 2) cluster analysis, 3) neural networks (Self-Organizing Map, SOM). Finally, we draw conclusions about the viability of each kind of map.
    1. Introduction. Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems, in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80s, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90s, several authors worked on this line of research, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin, 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed as a function of the words in the title and abstract fields, and represented it in a two-dimensional map (Lin, 1997). Later, Lin applied this type of map to create GTOCs for websites, through a Java application.
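    For readers unfamiliar with the SOM technique that Lin and the authors apply, the following Java sketch shows the core training loop. The grid size, learning-rate schedule, and Gaussian neighborhood are illustrative assumptions, not parameters reported in the paper.

    import java.util.Random;

    public class Som {
        final int rows, cols, dim;
        final double[][][] weights;       // weights[r][c] = codebook vector of unit (r,c)
        final Random rnd = new Random(42);

        Som(int rows, int cols, int dim) {
            this.rows = rows; this.cols = cols; this.dim = dim;
            weights = new double[rows][cols][dim];
            for (double[][] row : weights)
                for (double[] w : row)
                    for (int k = 0; k < dim; k++) w[k] = rnd.nextDouble();
        }

        void train(double[][] data, int epochs) {
            for (int t = 0; t < epochs; t++) {
                double frac = (double) t / epochs;
                double alpha = 0.5 * (1.0 - frac);                         // decaying learning rate
                double radius = Math.max(1.0, (rows / 2.0) * (1.0 - frac)); // shrinking neighborhood
                for (double[] x : data) {
                    int[] bmu = bestMatchingUnit(x);
                    for (int r = 0; r < rows; r++)
                        for (int c = 0; c < cols; c++) {
                            double d = Math.hypot(r - bmu[0], c - bmu[1]);
                            if (d > radius) continue;
                            double h = Math.exp(-(d * d) / (2 * radius * radius)); // Gaussian falloff
                            for (int k = 0; k < dim; k++)
                                weights[r][c][k] += alpha * h * (x[k] - weights[r][c][k]);
                        }
                }
            }
        }

        int[] bestMatchingUnit(double[] x) {
            int[] best = {0, 0};
            double bestDist = Double.MAX_VALUE;
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++) {
                    double dist = 0;
                    for (int k = 0; k < dim; k++) {
                        double diff = x[k] - weights[r][c][k];
                        dist += diff * diff;
                    }
                    if (dist < bestDist) { bestDist = dist; best = new int[]{r, c}; }
                }
            return best;
        }
    }

    After training, each document vector is assigned to its best-matching unit, and the two-dimensional grid of units is what gets drawn as the subject map.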
  9. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.03
    0.03453379 = product of:
      0.13813516 = sum of:
        0.13813516 = weight(_text_:java in 935) [ClassicSimilarity], result of:
          0.13813516 = score(doc=935,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.31145877 = fieldWeight in 935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=935)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
  10. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.03
    0.03453379 = product of:
      0.13813516 = sum of:
        0.13813516 = weight(_text_:java in 709) [ClassicSimilarity], result of:
          0.13813516 = score(doc=709,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.31145877 = fieldWeight in 709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=709)
      0.25 = coord(1/4)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  11. Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015) 0.03
    0.03453379 = product of:
      0.13813516 = sum of:
        0.13813516 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
          0.13813516 = score(doc=3301,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.31145877 = fieldWeight in 3301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3301)
      0.25 = coord(1/4)
    
    Abstract
    Analytico-synthetic and faceted classifications, such as Universal Decimal Classification (UDC) express content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations would be stored into an intermediate format (in this case, in XML) by automatic means without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of software as a service. This would result in the algorithm being able to be employed both in existing and future library systems to analyse UDC numbers without any significant programming effort.
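    To make the parsing problem concrete: a pre-combined UDC number mixes main-table numbers, connector symbols (+, /, :, ::) and auxiliaries in parentheses or quotation marks. The toy Java tokenizer below splits a notation into its top-level components; it is a deliberately simplified sketch with an assumed mini-grammar, whereas Piros's interpreter produces a full, lossless XML analysis.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class UdcTokenizer {
        private static final Pattern TOKEN = Pattern.compile(
              "\\([^)]*\\)"      // common auxiliaries in parentheses, e.g. (410), (075.8)
            + "|\"[^\"]*\""      // time auxiliaries in quotes, e.g. "1066/1485"
            + "|::|[+/:]"        // connector symbols: ::, +, /, :
            + "|[0-9][0-9.]*"    // main-table numbers, e.g. 94 or 621.39
        );

        public static List<String> tokenize(String notation) {
            List<String> tokens = new ArrayList<>();
            Matcher m = TOKEN.matcher(notation);
            while (m.find()) tokens.add(m.group());
            return tokens;
        }

        public static void main(String[] args) {
            // e.g. history of England 1066-1485, published as a textbook
            System.out.println(tokenize("94(410)\"1066/1485\"(075.8)"));
            // a simple colon combination
            System.out.println(tokenize("621.39:004"));
        }
    }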
  12. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.03
    0.031805135 = product of:
      0.12722054 = sum of:
        0.12722054 = weight(_text_:hyperlink in 1006) [ClassicSimilarity], result of:
          0.12722054 = score(doc=1006,freq=2.0), product of:
            0.49147287 = queryWeight, product of:
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.06293151 = queryNorm
            0.25885567 = fieldWeight in 1006, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.809647 = idf(docFreq=48, maxDocs=44421)
              0.0234375 = fieldNorm(doc=1006)
      0.25 = coord(1/4)
    
    Content
    Contents:
    Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval
    Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing
    Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence
    Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix
    Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E
    Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to v^T - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs
    Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alpha S) - 7.2 Properties of (I - alpha H) - 7.3 Proof of the PageRank Sparse Linear System
    Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
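    Chapters 4 through 6 revolve around a single computation, the power iteration for the PageRank vector. The Java sketch below runs that iteration on a tiny three-page link graph with the customary damping factor alpha = 0.85; both the graph and the parameter choice are illustrative assumptions, not examples taken from the book.

    import java.util.Arrays;

    public class PageRank {
        public static void main(String[] args) {
            int[][] link = { {0, 1, 1},    // page 0 links to pages 1 and 2
                             {0, 0, 1},    // page 1 links to page 2
                             {1, 0, 0} };  // page 2 links back to page 0
            int n = link.length;
            double alpha = 0.85;
            double[] rank = new double[n];
            Arrays.fill(rank, 1.0 / n);    // start from the uniform vector
            for (int iter = 0; iter < 100; iter++) {
                double[] next = new double[n];
                for (int i = 0; i < n; i++) {
                    int outDegree = 0;
                    for (int j = 0; j < n; j++) outDegree += link[i][j];
                    // distribute rank along outlinks; every page here has outlinks,
                    // so the dangling-node handling of section 8.4 is omitted
                    for (int j = 0; j < n; j++)
                        if (link[i][j] == 1) next[j] += alpha * rank[i] / outDegree;
                }
                for (int j = 0; j < n; j++) next[j] += (1 - alpha) / n; // teleportation term
                rank = next;
            }
            for (int j = 0; j < n; j++) System.out.printf("page %d: %.4f%n", j, rank[j]);
        }
    }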
  13. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.03
    0.030217065 = product of:
      0.12086826 = sum of:
        0.12086826 = weight(_text_:java in 1493) [ClassicSimilarity], result of:
          0.12086826 = score(doc=1493,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.2725264 = fieldWeight in 1493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.02734375 = fieldNorm(doc=1493)
      0.25 = coord(1/4)
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to: * The process behind architecting a large, complex site * Web site hierarchy design and organization Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
  14. Tennant, R.: Library catalogs : the wrong solution (2003) 0.03
    0.025900342 = product of:
      0.10360137 = sum of:
        0.10360137 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
          0.10360137 = score(doc=2558,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.23359407 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
      0.25 = coord(1/4)
    
    Content
    - User Interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib-web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons: a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources (books, articles, web pages, working papers in institutional repositories) and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
  15. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.03
    0.025900342 = product of:
      0.10360137 = sum of:
        0.10360137 = weight(_text_:java in 378) [ClassicSimilarity], result of:
          0.10360137 = score(doc=378,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.23359407 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
      0.25 = coord(1/4)
    
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
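    As a taste of what "The OWL API: A Java API for Working with OWL 2 Ontologies" (Horridge and Bechhofer, above) covers, here is a minimal sketch that loads an ontology and lists the classes in its signature. The file name is a placeholder, and the snippet assumes the OWL API's standard apibinding entry point.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    public class OwlApiDemo {
        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                    new File("family-history.owl"));  // hypothetical ontology document
            System.out.println("Loaded: " + ontology.getOntologyID());
            for (OWLClass cls : ontology.getClassesInSignature()) {
                System.out.println(cls.getIRI());     // print each named class IRI
            }
        }
    }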
  16. Chafe, W.L.: Meaning and the structure of language (1980) 0.02
    0.023941686 = product of:
      0.095766746 = sum of:
        0.095766746 = weight(_text_:und in 1220) [ClassicSimilarity], result of:
          0.095766746 = score(doc=1220,freq=32.0), product of:
            0.13957573 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06293151 = queryNorm
            0.6861275 = fieldWeight in 1220, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1220)
      0.25 = coord(1/4)
    
    Classification (RVK)
    ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
    ET 430 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Synchrone Semantik / Allgemeines (Gesamtdarstellungen)
  17. Boßmeyer, C.: UNIMARC und MAB : Strukturunterschiede und Kompatibilitätsfragen (1995) 0.02
    0.023696126 = product of:
      0.094784506 = sum of:
        0.094784506 = weight(_text_:und in 2436) [ClassicSimilarity], result of:
          0.094784506 = score(doc=2436,freq=6.0), product of:
            0.13957573 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06293151 = queryNorm
            0.67909014 = fieldWeight in 2436, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.125 = fieldNorm(doc=2436)
      0.25 = coord(1/4)
    
    Source
    Zeitschrift für Bibliothekswesen und Bibliographie. 42(1995) H.5, S.465-480
  18. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.02
    0.02158362 = product of:
      0.08633448 = sum of:
        0.08633448 = weight(_text_:java in 2211) [ClassicSimilarity], result of:
          0.08633448 = score(doc=2211,freq=2.0), product of:
            0.44351026 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06293151 = queryNorm
            0.19466174 = fieldWeight in 2211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2211)
      0.25 = coord(1/4)
    
    Content
    The Java applet, a prototype of this interface, is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. The D-Lib search interface is available at <http://www.dlib.org/Architext/AT-dlib2query.html>.
  19. SimTown : baue deine eigene Stadt (1995) 0.02
    0.020944614 = product of:
      0.083778456 = sum of:
        0.083778456 = weight(_text_:und in 5546) [ClassicSimilarity], result of:
          0.083778456 = score(doc=5546,freq=12.0), product of:
            0.13957573 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06293151 = queryNorm
            0.60023654 = fieldWeight in 5546, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.078125 = fieldNorm(doc=5546)
      0.25 = coord(1/4)
    
    Abstract
    SimTown was developed to introduce children to the most important concepts of economics (supply and demand), ecology (raw materials, pollution and recycling) and urban planning (the balance between housing, jobs and recreation areas) in a simple and entertaining way.
    Issue
    PC CD-ROM, Windows. Ages 8 and up.
  20. Atzbach, R.: Der Rechtschreibtrainer : Rechtschreibübungen und -spiele für die 5. bis 9. Klasse (1996) 0.02
    0.02073411 = product of:
      0.08293644 = sum of:
        0.08293644 = weight(_text_:und in 5647) [ClassicSimilarity], result of:
          0.08293644 = score(doc=5647,freq=6.0), product of:
            0.13957573 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06293151 = queryNorm
            0.5942039 = fieldWeight in 5647, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.109375 = fieldNorm(doc=5647)
      0.25 = coord(1/4)
    
    Abstract
    Old and new German spelling rules
    Issue
    MS-DOS and Windows.

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 804
  • m 312
  • el 104
  • s 92
  • i 21
  • n 17
  • x 13
  • r 11
  • b 7
  • ? 1
  • v 1
