Search (1243 results, page 4 of 63)

  Active filter: language_ss:"e"
  1. Ho, Y.-S.; Kahn, M.: A bibliometric study of highly cited reviews in the Science Citation Index Expanded(TM) (2014) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 2203) [ClassicSimilarity], result of:
          0.22427888 = score(doc=2203,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2203)
      0.25 = coord(1/4)
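The relevance trace above is Lucene's ClassicSimilarity explain output. A minimal sketch reproducing its arithmetic, with the values taken directly from the trace (the 0.25 coord factor indicates one of four query clauses matched):

```python
import math

# Values copied from the explain tree for doc 2203
freq = 2.0                  # termFreq of "hosted" in the document field
idf = 8.063882              # idf(docFreq=37, maxDocs=44421)
query_norm = 0.062434554    # queryNorm
field_norm = 0.0390625      # fieldNorm(doc=2203)

tf = math.sqrt(freq)                      # ClassicSimilarity tf = sqrt(freq) = 1.4142135
query_weight = idf * query_norm           # ~0.5034649
field_weight = tf * idf * field_norm      # ~0.44547075
term_score = query_weight * field_weight  # ~0.22427888
final = term_score * 0.25                 # coord(1/4) -> ~0.05606972
```

The same structure (only freq, fieldNorm, and idf changing) accounts for the scores of the other hits on this page.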
    
    Abstract
    Some 1,857 highly cited reviews, namely those cited at least 1,000 times from publication to 2011, were identified using the data hosted in the Science Citation Index Expanded(TM) database (Thomson Reuters, New York, NY) covering 1899 to 2011. The data are disaggregated by publication date, citation counts, journals, Web of Science® (Thomson Reuters) subject areas, citation life cycles, and publications by Nobel Prize winners. Six indicators (total publications, independent publications, collaborative publications, first-author publications, corresponding-author publications, and single-author publications) were applied to evaluate the publication output of institutions and countries. Among the highly cited reviews, 33% were single-author, 61% were single-institution, and 83% were single-country reviews. The United States ranked top on all six indicators. The G7 countries (United States, United Kingdom, Germany, Canada, France, Japan, and Italy) were the site of almost all the highly cited reviews. The top 12 most productive institutions were all located in the United States, with Harvard University (Cambridge, MA) in the lead. The top three most productive journals were Chemical Reviews, Nature, and the Annual Review of Biochemistry. In addition, the impact of the reviews was analyzed by total citations from publication to 2011, citations in 2011, and citations in the publication year.
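The six indicators reduce to simple tallies over author affiliations. A toy sketch (records and field names invented for illustration; "independent" is taken to mean single-country, as in the study's usage):

```python
# Each paper lists the country of every author, plus the countries of the
# first and corresponding authors (invented toy data).
papers = [
    {"authors": ["US", "US"], "first": "US", "corresponding": "US"},
    {"authors": ["US"],       "first": "US", "corresponding": "US"},
    {"authors": ["UK", "US"], "first": "UK", "corresponding": "US"},
]

def indicators(country, papers):
    tp = sum(country in p["authors"] for p in papers)           # total publications
    ip = sum(set(p["authors"]) == {country} for p in papers)    # independent (single-country)
    cp = sum(country in p["authors"] and len(set(p["authors"])) > 1
             for p in papers)                                   # internationally collaborative
    fp = sum(p["first"] == country for p in papers)             # first-author
    rp = sum(p["corresponding"] == country for p in papers)     # corresponding-author
    sp = sum(p["authors"] == [country] for p in papers)         # single-author
    return tp, ip, cp, fp, rp, sp
```

Ranking countries by each of the six tuples reproduces the kind of league table the abstract describes.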
  2. Jarrahi, M.H.; Sawyer, S.: Theorizing on the take-up of social technologies, organizational policies and norms, and consultants' knowledge-sharing practices (2015) 0.06
    
    Abstract
    We identify the effects of specific organizational norms, arrangements, and policies regarding uses of social technologies for informal knowledge sharing by consultants. For this study, the term social technologies refers to the fast-evolving suite of tools such as traditional applications like e-mail, phone, and instant messenger; emerging social networking platforms (often known as social media) such as blogs and wikis; public social networking sites (e.g., Facebook, Twitter, and LinkedIn); and enterprise social networking technologies that are specifically hosted within one organization's computing environment (e.g., Socialtext). Building from structuration theory, the analysis presented focuses on the knowledge practices of consultants related to their uses of social technologies and the ways in which organizational norms and policies influence these practices. A primary contribution of this research is a detailed contextualization of social technology uses by knowledge workers. As many organizations are allowing social media-enabled knowledge sharing to develop organically, most corporate policy toward these platforms remains defensive, not strategic, limiting opportunities. Implications for uses and expectations of social technologies arising from this research will help organizations craft relevant policies and rules to best support technology-enabled informal knowledge practices.
  3. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.06
    
    Abstract
    Descriptive metadata play a key role in finding relevant search results in large amounts of unstructured data. However, current scientific audiovisual media are provided with little metadata, which makes them hard to find, let alone individual sequences. In this paper, the TIB / AV-Portal is presented as a use case where methods concerning the automatic generation of metadata, a semantic search and cross-lingual retrieval (German/English) have already been applied. These methods result in better discoverability of the scientific audiovisual media hosted in the portal. Text, speech, and image content of the video are automatically indexed by specialised GND (Gemeinsame Normdatei) subject headings. A semantic search is established based on properties of the GND ontology. The cross-lingual retrieval uses English 'translations' that were derived by an ontology mapping (DBpedia, among others). Further ways of increasing the discoverability and reuse of the metadata are publishing them as Linked Open Data and interlinking them with other data sets.
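The cross-lingual retrieval step amounts to expanding a query through a subject-heading mapping. A minimal sketch of that idea (the mapping entries below are invented stand-ins, not actual GND/DBpedia data):

```python
# Hypothetical German subject heading -> English 'translation' mapping,
# standing in for the portal's GND-to-DBpedia ontology mapping.
gnd_to_en = {
    "Informationsrückgewinnung": "information retrieval",
    "Maschinelles Lernen": "machine learning",
}

def expand_query(term):
    """Return the original term plus its English translation, if known."""
    en = gnd_to_en.get(term)
    return [term] if en is None else [term, en]
```

Running the expanded term list against the index lets a German query match English-language metadata and vice versa.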
  4. Introne, J.; Erickson, I.: Designing sustainable online support : examining the effects of design change in 49 online health support communities (2020) 0.06
    
    Abstract
    Online social support communities can significantly improve health outcomes for individuals living with disease. Although they are well studied in the literature, little research examines how sociotechnical design changes influence the sustainability of support communities for different medical conditions. We compare the impact of a single design change on 49 disease-specific health support forums hosted on the WebMD platform, a popular online health information service. A statistical analysis showcases changes in posting patterns before and after the design intervention; a subsequent interpretive examination of forum content reveals how the design change affected members' perceived affordances of the platform. Our findings suggest that, despite differences between communities, the design change triggered a common set of cascading effects: it made it difficult for core users to create and maintain relationships, which led them ultimately to leave the site and, in turn, reduced the activity drawing newcomers to the platform. Using these findings, we argue that the design of sustainable and robust online communities must account for systemic, sociotechnical dynamics.
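The core of a before/after posting-pattern analysis is a comparison of activity on either side of the intervention date. A minimal sketch with invented dates and counts (the real study used 49 forums and proper statistics, not a two-day mean):

```python
from datetime import date

change = date(2016, 6, 1)   # hypothetical design-change date
daily_posts = {             # invented daily post counts for one forum
    date(2016, 5, 30): 40, date(2016, 5, 31): 38,
    date(2016, 6, 2): 21,  date(2016, 6, 3): 19,
}

before = [n for d, n in daily_posts.items() if d < change]
after = [n for d, n in daily_posts.items() if d >= change]
# Relative drop in mean daily posting after the intervention
drop = 1 - (sum(after) / len(after)) / (sum(before) / len(before))
```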
  5. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.06
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the variation degrees of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names hosted in those databases with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as the MAG's author IDs system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
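Using ORCID as the authority makes pairwise labeling mechanical: two records are a positive pair when their ORCID iDs match. A toy sketch in the spirit of LAGOS-AND-PAIRWISE (records, DOIs, and iDs are invented):

```python
from itertools import combinations

# Invented authorship records keyed by DOI, each carrying the ORCID iD
# that authoritatively identifies the author.
records = [
    {"doi": "10.1/a", "name": "J. Smith",   "orcid": "0000-0001"},
    {"doi": "10.1/b", "name": "J. Smith",   "orcid": "0000-0002"},
    {"doi": "10.1/c", "name": "John Smith", "orcid": "0000-0001"},
]

# Label each record pair: 1 = same author (same ORCID), 0 = different.
pairs = [
    (r1["doi"], r2["doi"], int(r1["orcid"] == r2["orcid"]))
    for r1, r2 in combinations(records, 2)
]
```

Note how the two "J. Smith" records form a negative pair while differently spelled names share an author, which is exactly the signal a classification-based AND model must learn.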
  6. Wedlake, S.; Coward, C.; Lee, J.H.: How games can support misinformation education : a sociocultural perspective (2024) 0.06
    
    Abstract
    This study uses a sociocultural perspective, which views literacy as embedded in people's daily practices and shaped by social contexts, to explore how a misinformation escape room can support learning about misinformation. While the sociocultural perspective has a rich theoretical foundation, it has rarely been used to examine, much less evaluate, information and media literacy interventions. In this paper, we posit that the topic of misinformation makes a strong case for using the sociocultural model and explore a misinformation escape room through this lens. We present findings of a nationwide study of an online misinformation escape room with post-game debrief discussion conducted at 10 public libraries that hosted 53 game sessions involving 211 players. The mixed methods study finds the game and accompanying debrief supported players in reflecting upon social media platform infrastructures, the psychological and emotional dimensions of misinformation, and how their personal behaviors intersect with online misinformation. We discuss how the sociocultural perspective can enrich our understanding of the role played by certain attributes of the game (narrative, debrief, and collaboration), thereby providing insights for the design of media and information literacy interventions.
  7. Smiraglia, R.P.: ISKO 11's diverse bookshelf : an editorial (2011) 0.06
    
    Abstract
    As we all know, Knowledge Organization (KO) is a pretty broad domain. Although the concept-theoretic approach to classification is at the core along with several other important pieces of what we call classification theory, both the intension and the extension of the domain are represented by broad trajectories. Arguably, the biennial conferences represent way stations within the matrix of the domain: points in time when we pause to take stock of our current research. Also, because each conference is hosted and planned by a regional chapter, each then reflects peculiar parameters of the intersections of intensional and extensional trajectories. Perhaps because the domain of knowledge itself is so immense, so also is our corporate attempt to grapple with the theoretical and applied aspects of its organization. Furthermore, because of the breadth of our domain, many possibilities exist for its representation, depending on the constitution of the research front (or fronts) at any moment in time. That is, research in the domain stretches in all directions from its solid theoretical core down many much more granular roadways. Thus by analyzing the activity and contents of these metaphorical way stations (that is, by bringing the tools of domain analysis to bear on our own biennial conferences) we are able to visualize the moment in time represented by the accumulated scholarship generated by each conference. 2010's 11th International ISKO Conference in Rome offered the latest opportunity for analysis on a broad scale.
    To take advantage of the wonderful Italian weather, ISKO's 2010 conference was moved from the usual August to February; the venue was the Sapienza University (officially Sapienza - Università di Roma) and the conference took place 23-26 February 2010. The conference was organized and hosted by ISKO Italy and the Faculty of Philosophy of Sapienza University. Each morning as attendees arrived, we were treated to the garden pictured in Figure 1, and especially interesting was the fountain and the statue of St. Francis. Of course, the mystery was the turtle at St. Francis' foot, which looks quite like part of the statue but turned out to be real. The peaceful gardens were just a hallmark of the contemplative nature of the conference. Officially the 11th International ISKO Conference, the theme was "Paradigms and Conceptual Systems in Knowledge Organization." The proceedings and the conference program together listed 65 presentations, of which 64 were actually presented and 61 had papers included in the proceedings (or, 4 papers were presented but not included in the proceedings, and 1 paper included in the proceedings was not presented). Although space is insufficient for a full analysis, following on from my editorial after ISKO 10 (Smiraglia 2008), I will use this space to paint a brief bibliometric portrait of the domain at the core of this conference. Data for this analysis come from the PDF of the proceedings; all citations for all papers were pasted into an Excel spreadsheet, where the citations were variously delimited for the following analyses. The original file is available on my blog: http://lazykoblog.wordpress.com/.
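The bibliometric portrait described here rests on frequency counting over the delimited citation strings. A minimal sketch of that step (the citation strings below are invented examples, not data from the proceedings):

```python
from collections import Counter

# Invented citation strings of the "Author (Year)" shape produced by
# delimiting the pasted proceedings references.
citations = [
    "Hjorland, B. (2002)", "Smiraglia, R. (2008)",
    "Hjorland, B. (1997)", "Beghtol, C. (2003)",
]

# Count citations per author by splitting off the "(Year)" tail.
by_author = Counter(c.split(" (")[0] for c in citations)
top = by_author.most_common(1)
```

The same Counter, applied per journal or per year instead of per author, yields the other standard views of a conference's intellectual base.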
  8. Gibson, P.: Professionals' perfect Web world in sight : users want more information on the Web, and vendors attempt to provide (1998) 0.05
    
    Abstract
    Many information professionals feel that the time is still far off when the WWW can offer the combined functionality and content of traditional online and CD-ROM databases, but there have been a number of recent Web developments to reflect on. Describes the testing and launch by Ovid of its Java client which, in effect, allows access to its databases on the Web with full search functionality, and the initiative of Euromonitor in providing Web access to its whole collection of consumer research reports and its entire database of business sources. Also reviews the service of a newcomer to the information scene, Information Quest (IQ), founded by Dawson Holdings, which has made an agreement with Infonautics to offer access to its Electric Library database, thus adding over 1,000 reference, consumer and business publications to its Web-based journal service.
  9. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.05
    
    Abstract
    Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document can also exist without computers and networks. Document program hybrids blur this classical distinction and all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW, multimedia viewer and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels.
  10. Mills, T.; Moody, K.; Rodden, K.: Providing world wide access to historical sources (1997) 0.05
    
    Abstract
    A unique collection of historical material covering the lives and events of an English village between 1400 and 1750 has been made available via a WWW-enabled information retrieval system. Since the expected readership of the documents ranges from school children to experienced researchers, providing this information in an easily accessible form has offered many challenges requiring tools to aid searching and browsing. The file structure of the document collection was replaced by a database, enabling query results to be presented on the fly. A Java interface displays each user's context in a form that allows for easy and intuitive relevance feedback.
  11. Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997) 0.05
    
    Abstract
    Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic content-based tailoring of Web maps in both the generation as well as the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser and requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
  12. Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006) 0.05
    
    Abstract
    The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.
  13. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.05
    
    Abstract
    The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory [IU IP Lab]. The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus. These include grid and cluster computing, and a standard Java-based software platform to support plug-and-play research datasets, a selection of standard IR modules and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms into the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
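An OAI-PMH harvest is driven by plain HTTP GET requests. A minimal sketch of how such a request is formed (the base URL and set name are placeholders; the `verb` and `metadataPrefix` parameters are standard OAI-PMH 2.0):

```python
from urllib.parse import urlencode

# Placeholder endpoint; a real harvester would point at the target
# repository's OAI interface.
base = "https://example.org/oai"

# ListRecords with the mandatory Dublin Core metadata format; the
# 'dlib' set name is an assumption for illustration.
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "set": "dlib"}
url = f"{base}?{urlencode(params)}"
```

Fetching `url` returns an XML response whose `resumptionToken` drives paging through the full archive.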
  14. Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009) 0.05
    
    Abstract
    It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity and to answer "what the proportion of ambiguous queries is". This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing the taxonomy of queries from being ambiguous to specific. Second, we ask human annotators to manually classify queries. From manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that we can correctly identify 87% of labeled queries with the approach. Finally, by using our approach, we estimate that about 16% of queries in a real search log are ambiguous.
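Once queries are manually labeled, estimating the share of ambiguous ones is a simple proportion. A toy sketch (the labeled sample below is invented; the paper's own estimate from a real search log was about 16%):

```python
# Hypothetical manually annotated sample: each query is labeled
# "ambiguous" or "specific" along the paper's taxonomy.
labeled = {
    "java": "ambiguous",
    "apple": "ambiguous",
    "python tutorial": "specific",
    "weather boston": "specific",
    "jaguar": "ambiguous",
}

share = sum(v == "ambiguous" for v in labeled.values()) / len(labeled)
```

A supervised classifier, as proposed in the paper, replaces the manual labels with predictions so the same proportion can be estimated over an entire log.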
  15. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.05
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice, is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforce key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end of chapter problems (Instructors only) / Test collections for exercises / Galago search engine
  16. Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
  17. Yancey, T.; Clarke, D.; Carson, J.: Lexicography without limits : a Web-based solution (1999) 0.04
    
    Abstract
    Web-based technology enables virtual production environments to be created in which teams of both in-house staff and remotely-based contractors can work together on thesaurus construction and indexing projects. For the past four years Synapse, the Knowledge Link Corporation has been developing a sophisticated web-based thesaurus construction, indexing and knowledge management software application. The Gale Group has been licensing the application since 1997, providing a virtual production environment for large-scale thesaurus construction and reference-content indexing projects. Synapse Corporation and The Gale Group jointly present the session "Lexicography Without Limits - A Web-Based Solution" to illustrate how web-based technology provides new solutions for the tasks of vocabulary development and indexing. Vocabulary development and indexing projects frequently require project teams to be assembled using contract lexicographers, indexers and editors to supplement in-house resources. Synapse Corporation, a company specializing in providing lexicography and indexing services, has developed a software solution that enables information specialists, who may be based in different organizational entities and geographic locations, to have real-time editorial access to centralized databases. The Synaptica software application can be accessed from anywhere in the world using standard web browsers. Each client project is hosted at a unique, secured web site and users are granted password-protected access. Synaptica supports the construction of ANSI/NISO Z39.19 [1] compliant electronic thesauri, and also has many additional components that integrate related tasks such as authority control and indexing. The presentation will examine The Gale Group as a case study and will discuss the practical issues of managing remote teams of lexicographers and indexers as well as illustrating the software functionality
  18. Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998)
    
    Abstract
    As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best-first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best-first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than that of the best-first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for the design of a truly interactive and dynamic Web agent.
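The best-first search strategy the abstract describes can be sketched in a few lines: keep a priority queue of candidate pages scored against the user's keywords, and always expand the highest-scoring page next. This is a hedged illustration of the general technique, not the paper's implementation; `fetch_links` and the scoring function are stand-ins for real HTTP fetching and the paper's keyword indexing:

```python
# Minimal sketch of a best-first-search "personal spider".
# Pages are scored by keyword overlap; the frontier is a priority
# queue so the most promising page is always expanded first.
import heapq

def score(page_text, keywords):
    words = set(page_text.lower().split())
    return sum(1 for k in keywords if k in words)

def best_first_crawl(start_pages, keywords, fetch_links, limit=10):
    # heapq is a min-heap, so push negated scores for best-first order
    frontier = [(-score(text, keywords), url)
                for url, text in start_pages.items()]
    heapq.heapify(frontier)
    visited, results = set(), []
    while frontier and len(results) < limit:
        neg, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        results.append((url, -neg))
        for link_url, link_text in fetch_links(url):
            if link_url not in visited:
                heapq.heappush(frontier,
                               (-score(link_text, keywords), link_url))
    return results

# Toy run with a two-page "Web"
pages = {"p1": "genetic algorithm search"}
def fetch(url):
    return [("p2", "genetic spider web")] if url == "p1" else []
print(best_first_crawl(pages, ["genetic", "spider"], fetch))
# -> [('p1', 1), ('p2', 2)]
```

A genetic-algorithm spider would replace the greedy queue with a population of pages subjected to selection, crossover, and mutation, which is what lets it escape the purely local search behavior noted in the abstract.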
  19. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
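Freeman's (1979) betweenness centrality, which CiteSpace II uses to flag pivotal points, counts how often a node lies on shortest paths between other node pairs. A minimal sketch using Brandes' algorithm on an unweighted, undirected co-citation graph follows; this illustrates the metric itself and is not CiteSpace's own implementation:

```python
# Minimal sketch of betweenness centrality via Brandes' algorithm
# on an unweighted, undirected graph given as an adjacency dict.
from collections import deque, defaultdict

def betweenness(graph):
    cb = dict.fromkeys(graph, 0.0)
    for s in graph:
        # BFS from s, counting shortest paths (sigma) and predecessors
        pred = defaultdict(list)
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulate path dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    # undirected graph: every path is counted from both endpoints
    return {v: c / 2 for v, c in cb.items()}

# Path graph a-b-c: b lies on the only shortest path between a and c
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(betweenness(g))  # b scores 1.0; a and c score 0.0
```

In a co-citation network, a node with high betweenness bridges otherwise separate clusters, which is why CiteSpace treats such nodes as candidate pivotal points between research fronts.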
  20. Eddings, J.: How the Internet works (1994)
    
    Abstract
    How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer for Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
