Search (1243 results, page 3 of 63)

  • Active filter: language_ss:"e"
  1. White, M.J.: Patents and patent searching (2009) 0.07
    0.06728366 = product of:
      0.26913464 = sum of:
        0.26913464 = weight(_text_:hosted in 846) [ClassicSimilarity], result of:
          0.26913464 = score(doc=846,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.53456485 = fieldWeight in 846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.046875 = fieldNorm(doc=846)
      0.25 = coord(1/4)
    
    Abstract
    Patents are limited monopoly rights granted by governments that allow inventors to prevent others from making, using, or selling their inventions for up to 20 years. In exchange, inventors must disclose details about their inventions. Patent documents are a valuable open source of scientific and technical information, some of which does not appear in other types of publications. For more than 200 years patent offices have disseminated patent information to the public in order to promote awareness of patent rights and to further technological development. During that time patent documents have evolved from handwritten manuscripts to printed documents to electronic text. Print-based search tools such as indexes and patent classification manuals have given way to online databases and hyperlinked documents. Today, patent searchers can search and retrieve millions of patent documents from numerous free Web-based databases hosted by patent offices and independent organizations.
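The figures above each hit are Lucene "explain" output for ClassicSimilarity, i.e. a TF-IDF score: the matched term's queryWeight (idf × queryNorm) is multiplied by its fieldWeight (tf × idf × fieldNorm) and by the coord factor. As a check, the first hit's score can be recomputed from the numbers shown; the sketch below assumes the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) and is purely illustrative.

```java
// Illustrative recomputation of the ClassicSimilarity (TF-IDF) score for the
// first hit, using the constants from its explain tree. The formulas
// tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) are the standard
// Lucene ClassicSimilarity definitions and are assumed here.
public class ClassicSimilarityCheck {
    public static void main(String[] args) {
        double freq      = 2.0;          // termFreq of "hosted" in the matched field
        double docFreq   = 37;           // documents containing "hosted"
        double maxDocs   = 44421;        // documents in the index
        double queryNorm = 0.062434554;  // normalisation over all query terms
        double fieldNorm = 0.046875;     // length norm of the matched field
        double coord     = 0.25;         // coord(1/4): one of four query clauses matched

        double tf  = Math.sqrt(freq);                          // 1.4142135
        double idf = 1 + Math.log(maxDocs / (docFreq + 1));     // 8.063882

        double queryWeight = idf * queryNorm;                   // 0.5034649
        double fieldWeight = tf * idf * fieldNorm;              // 0.53456485
        double score       = queryWeight * fieldWeight * coord; // 0.06728366

        System.out.printf("queryWeight=%.7f fieldWeight=%.7f score=%.8f%n",
                queryWeight, fieldWeight, score);
    }
}
```

Running this reproduces the displayed 0.26913464 = queryWeight × fieldWeight and the final 0.06728366 after the coord(1/4) factor.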
  2. Huang, M.-H.; Tang, M.-C.; Chen, D.-Z.: Inequality of publishing performance and international collaboration in physics (2011) 0.07
    0.06728366 = product of:
      0.26913464 = sum of:
        0.26913464 = weight(_text_:hosted in 467) [ClassicSimilarity], result of:
          0.26913464 = score(doc=467,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.53456485 = fieldWeight in 467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.046875 = fieldNorm(doc=467)
      0.25 = coord(1/4)
    
    Abstract
     Using a database of 1.4 million papers indexed by Web of Science, we examined the global trends in publication inequality and international collaboration in physics. The publication output and citations received by authors hosted in each country were taken into account. Although inequality decreased over time, further progress toward equality has somewhat abated in recent years. The skewness of the global distribution in publication output was shown to be correlated with article impact, that is, the inequality is more significant in articles of higher impact. It was also observed that, despite the trend toward a more egalitarian distribution, scholarly participation in physics is still determined by a select group. Particularly noteworthy has been China's rapid growth in publication output and a gradual improvement in its impact. Finally, the data also suggested regional differences in scientific collaboration. A distinctively high concentration of transnational collaboration and publication performance was found among EU countries.
  3. Orduna-Malea, E.; Thelwall, M.; Kousha, K.: Web citations in patents : evidence of technological impact? (2017) 0.07
    0.06728366 = product of:
      0.26913464 = sum of:
        0.26913464 = weight(_text_:hosted in 4764) [ClassicSimilarity], result of:
          0.26913464 = score(doc=4764,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.53456485 = fieldWeight in 4764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.046875 = fieldNorm(doc=4764)
      0.25 = coord(1/4)
    
    Abstract
    Patents sometimes cite webpages either as general background to the problem being addressed or to identify prior publications that limit the scope of the patent granted. Counts of the number of patents citing an organization's website may therefore provide an indicator of its technological capacity or relevance. This article introduces methods to extract URL citations from patents and evaluates the usefulness of counts of patent web citations as a technology indicator. An analysis of patents citing 200 US universities or 177 UK universities found computer science and engineering departments to be frequently cited, as well as research-related webpages, such as Wikipedia, YouTube, or the Internet Archive. Overall, however, patent URL citations seem to be frequent enough to be useful for ranking major US and the top few UK universities if popular hosted subdomains are filtered out, but the hit count estimates on the first search engine results page should not be relied upon for accuracy.
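As a rough illustration of the kind of extraction the article describes (URL citations pulled from patent text, with popular hosted subdomains filtered out before counting), the following sketch uses a simple regular expression and a hand-picked exclusion list; both are assumptions for illustration, not the authors' actual method.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: pull URLs out of patent text, drop popular hosted
// subdomains (blogs, video platforms, etc.), and count citations per domain.
public class PatentUrlCitations {
    private static final Pattern URL =
            Pattern.compile("https?://([\\w.-]+)", Pattern.CASE_INSENSITIVE);
    // Hosted platforms to exclude, as suggested (not prescribed) by the abstract.
    private static final Set<String> HOSTED = Set.of("wordpress.com", "blogspot.com", "youtube.com");

    public static Map<String, Integer> countCitations(List<String> patentTexts) {
        Map<String, Integer> counts = new HashMap<>();
        for (String text : patentTexts) {
            Matcher m = URL.matcher(text);
            while (m.find()) {
                String host = m.group(1).toLowerCase();
                boolean hosted = HOSTED.stream().anyMatch(host::endsWith);
                if (!hosted) {
                    counts.merge(host, 1, Integer::sum);
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "... see http://www.example.ac.uk/research and http://user.blogspot.com/post ...");
        System.out.println(countCitations(sample)); // {www.example.ac.uk=1}
    }
}
```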
  4. Braeckman, J.: ¬The integration of library information into a campus wide information system (1996) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.2398276 = score(doc=729,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
  5. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.2398276 = score(doc=756,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
     Describes two research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections.
  6. Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
          0.2398276 = score(doc=3611,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 3611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3611)
      0.25 = coord(1/4)
    
    Abstract
     Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos' WebSketch and ACD Interactive Lab adopt a different approach: using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend to other electronic publications.
  7. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
          0.2398276 = score(doc=4287,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
    
    Abstract
     Describes the work undertaken by the William Blake Archive, University of Virginia, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into the portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one location to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
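To make the idea of "slotting" a textual record into a JPEG concrete, the sketch below inserts a COM (comment) segment directly after the SOI marker, one part of the JPEG syntax reserved for free text. Whether the Blake Archive used a COM or an application (APPn) segment is not stated in the abstract, so treat the mechanism as an assumption.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Illustration only: embed a textual record in a JPEG by inserting a COM
// (comment, marker 0xFFFE) segment directly after the SOI marker. This is an
// assumption about the mechanism, not a description of the Archive's tool.
public class JpegTextRecord {
    public static byte[] withComment(byte[] jpeg, String record) {
        byte[] payload = record.getBytes(StandardCharsets.ISO_8859_1);
        int len = payload.length + 2;              // segment length field counts its own two bytes
        if (len > 0xFFFF) throw new IllegalArgumentException("record too long for one segment");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0xFF); out.write(0xD8);          // SOI marker
        out.write(0xFF); out.write(0xFE);          // COM marker
        out.write((len >> 8) & 0xFF); out.write(len & 0xFF);
        out.write(payload, 0, payload.length);     // the Image Documentation text itself
        out.write(jpeg, 2, jpeg.length - 2);       // rest of the original file after SOI
        return out.toByteArray();
    }
}
```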
  8. Priss, U.: ¬A graphical interface for conceptually navigating faceted thesauri (1998) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 658) [ClassicSimilarity], result of:
          0.2398276 = score(doc=658,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=658)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes a graphical interface for the navigation and construction of faceted thesauri that is based on formal concept analysis. Each facet of a thesaurus is represented as a mathematical lattice that is further subdivided into components. Users can graphically navigate through the Java implementation of the interface by clicking on terms that connect facets and components. Since there are many applications for thesauri in the knowledge representation field, such a graphical interface has the potential of being very useful
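Since the interface is grounded in formal concept analysis, the two FCA derivation operators are the core machinery: the attributes common to a set of objects, and the objects sharing a set of attributes; a pair closed under both is a formal concept, and each facet is the lattice of such concepts. The following is a minimal sketch over a made-up thesaurus-flavoured context, not code from the paper.

```java
import java.util.*;

// Minimal formal concept analysis helper: the two derivation operators over a
// binary object/attribute context. A formal concept is a pair (A, B) with
// A' = B and B' = A. The tiny context below is invented for illustration.
public class FormalContext {
    private final Map<String, Set<String>> objectAttributes; // object -> attributes it has

    public FormalContext(Map<String, Set<String>> objectAttributes) {
        this.objectAttributes = objectAttributes;
    }

    /** A' : attributes shared by every object in A. */
    public Set<String> commonAttributes(Set<String> objects) {
        Set<String> result = null;
        for (String o : objects) {
            Set<String> attrs = objectAttributes.getOrDefault(o, Set.of());
            if (result == null) result = new HashSet<>(attrs);
            else result.retainAll(attrs);
        }
        return result == null ? Set.of() : result;
    }

    /** B' : objects that have every attribute in B. */
    public Set<String> commonObjects(Set<String> attributes) {
        Set<String> result = new HashSet<>();
        for (var e : objectAttributes.entrySet()) {
            if (e.getValue().containsAll(attributes)) result.add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        FormalContext ctx = new FormalContext(Map.of(
                "thesaurus", Set.of("controlled", "hierarchical"),
                "folksonomy", Set.of("uncontrolled"),
                "classification", Set.of("controlled", "hierarchical")));
        Set<String> intent = ctx.commonAttributes(Set.of("thesaurus", "classification"));
        Set<String> extent = ctx.commonObjects(intent);
        // e.g. [controlled, hierarchical] / [thesaurus, classification] (set order may vary)
        System.out.println(intent + " / " + extent);
    }
}
```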
  9. Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
          0.2398276 = score(doc=1211,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
     Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals.
  10. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
          0.2398276 = score(doc=4608,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 4608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4608)
      0.25 = coord(1/4)
    
    Abstract
     In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document: it contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
  11. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.06
    0.0599569 = product of:
      0.2398276 = sum of:
        0.2398276 = weight(_text_:java in 5801) [ClassicSimilarity], result of:
          0.2398276 = score(doc=5801,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.5450528 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
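The abstract only says that "a Java program" performs the XML-to-HTML conversion; a common way to do this on the fly is the JAXP transformation API with an XSLT stylesheet, sketched below with hypothetical file names.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

// Sketch of on-the-fly XML-to-HTML conversion with JAXP and an XSLT stylesheet.
// The stylesheet and file names are hypothetical; whether the Center actually
// used XSLT is not stated in the abstract.
public class XmlToHtml {
    public static void main(String[] args) throws TransformerException {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("lector-longinquus.xsl")));
        t.transform(new StreamSource(new File("latin-text.xml")),
                    new StreamResult(new File("latin-text.html")));
    }
}
```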
  12. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.05979252 = product of:
      0.11958504 = sum of:
        0.08565272 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.08565272 = score(doc=134,freq=2.0), product of:
            0.44000798 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062434554 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.033932324 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.033932324 = score(doc=134,freq=32.0), product of:
            0.13847354 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.062434554 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
    
    Footnote
     Review in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, p.253 (D. Lewandowski): "With "Google Hacks", the most comprehensive work to date is available that is aimed exclusively at the advanced Google user. Accordingly, this book does not contain the usual beginners' tips that generally make search engine books and other guides to Internet research uninteresting for the professional user. In Tara Calishain, an author has been found who has published her own search engine newsletter (www.researchbuzz.com) for almost five years and has written or co-written several books on the subject of research. Rael Dornfest is responsible for the program examples in the book. The first chapter ("Searching Google") gives an insight into advanced search options and the specifics of the search engine under discussion. The author's approach to searching becomes clear here: the best method, she argues, is to narrow the number of hits oneself to the point where a manageable set remains that can actually be reviewed. To this end, the field-specific search options in Google are explained, tips for special searches (for journal archives, technical definitions, etc.) are given, and special functions of the Google Toolbar are described. It is a pleasant surprise that even the experienced Google user learns something new while reading. The only shortcoming of this chapter is that it never looks beyond Google itself: it is, for example, possible to restrict a date search in Google more precisely than with the selection field offered in the advanced search, but the solution shown is extremely cumbersome and of only limited use in everyday research. What is missing here is a note that other search engines offer far more convenient restriction options. Of course, the present work is a book exclusively about Google, but a pointer to its weaknesses would still have been helpful. In later chapters, alternative search engines are indeed mentioned for solving individual problems. The second chapter is devoted to the data collections Google offers alongside classic web search. These are the directory entries, newsgroups, images, news search, and the areas less well known in Germany: Catalogs (search in printed mail-order catalogues), Froogle (a shopping search engine launched this year), and Google Labs (where newly developed Google functions are released for public testing). After the first two chapters have dealt at length with Google's own offerings, from chapter three onwards the book turns to the ways in which Google's data can be put to one's own uses by means of programming. On the one hand, programs already available on the web are presented; on the other, the book contains many listings with explanations for programming one's own applications. The interface between the user and the Google database is the Google API ("Application Programming Interface"), which allows registered users to send up to 1,000 queries a day to Google through their own search interface. The results are returned in a form that can be processed further by machine. In addition, the database can be queried more extensively than through the Google search form.
     Since Google, unlike other search engines, prohibits automated querying of its database in its terms of use, the API is the only way to build one's own applications on top of Google. A separate chapter describes the options for using the API with different programming languages such as PHP, Java, Python, etc. The examples in the book, however, are all written in Perl, so it seems sensible to work in this language for one's own first experiments as well.
     The sixth chapter contains 26 applications of the Google API, some developed by the book's authors themselves and some published on the web by other authors. Among the applications presented as particularly useful are the TouchGraph Google Browser for visualizing results and an application that allows Google searches with proximity operators. It is striking that the more interesting of these applications were not programmed by the book's authors, who confined themselves to simpler applications such as counting hits by top-level domain. Nevertheless, these applications too are for the most part useful. A further chapter presents pranks and games realized with the Google API. Their usefulness is of course questionable; for completeness' sake they may belong in the book. More interesting, in turn, is the final chapter, "The Webmaster Side of Google". Here, site operators are told how Google works, how best to word and place advertisements, which rules to observe if one wants one's pages placed well in Google, and finally also how to remove pages from the Google index again. These remarks are kept very brief and therefore do not replace works that deal in depth with search engine marketing. However, in contrast to some other books on the subject, they are thoroughly serious and do not promise miracles with respect to the placement of one's own pages in the Google index. "Google Hacks" can also be recommended to those who do not want to occupy themselves with programming against the API. Because it is the most extensive collection so far of tips and techniques for a more targeted use of Google, it is suitable for every advanced Google user. Some of the hacks may simply have been included so that the total of 100 is reached; other tips, however, clearly add extended possibilities for research. In this respect, the book also helps to compensate somewhat for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung No. 207 of 6 Sept. 2003, p. RAS04A/1 (Rundschau am Sonntag: Netzwelt), by P. Zschunke: Richtig googeln (see there)
  13. Thelwall, M.: ¬A layered approach for investigating the topological structure of communities in the Web (2003) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 5450) [ClassicSimilarity], result of:
          0.22427888 = score(doc=5450,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 5450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5450)
      0.25 = coord(1/4)
    
    Abstract
     A layered approach for identifying communities in the Web is presented and explored by applying the Flake exact community identification algorithm to the UK academic Web. Although community or topic identification is a common task in information retrieval, a new perspective is developed by: the application of alternative document models, shifting the focus from individual pages to aggregated collections based upon Web directories, domains and entire sites; the removal of internal site links; and the adaptation of a new fast algorithm to allow fully automated community identification using all possible single starting points. The overall topology of the graphs in the three least-aggregated layers was first investigated and found to include a large number of isolated points but, surprisingly, with most of the remainder being in one huge connected component, exact proportions varying by layer. The community identification process then found that the number of communities far exceeded the number of topological components, indicating that community identification is a potentially useful technique, even with random starting points. Both the number and the size of the communities identified were dependent on the parameter of the algorithm, with very different results being obtained in each case. In conclusion, the UK academic Web is embedded with layers of non-trivial communities and, if it is not unique in this, then there is the promise of improved results for information retrieval algorithms that can exploit this additional structure, and the application of the technique directly to partially automate Web metrics tasks such as that of finding all pages related to a given subject hosted by a single country's universities.
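The document models described above (aggregating pages into directories, domains or whole sites and discarding internal site links) can be illustrated with a small sketch; host-level aggregation is assumed here as a stand-in for the paper's "domain" layer, and the Flake community identification step itself is not shown.

```java
import java.net.URI;
import java.util.*;

// Sketch of one aggregation layer: page-level links are collapsed to
// host-level edges, and links internal to a site are discarded. The community
// identification step (Flake's maximum-flow algorithm) is not reproduced.
public class DomainLayer {
    /** pageLinks: source page URL -> target page URLs harvested by a crawler. */
    public static Map<String, Set<String>> aggregate(Map<String, List<String>> pageLinks) {
        Map<String, Set<String>> edges = new HashMap<>();
        pageLinks.forEach((source, targets) -> {
            String srcHost = URI.create(source).getHost();
            for (String target : targets) {
                String dstHost = URI.create(target).getHost();
                if (srcHost != null && dstHost != null && !srcHost.equals(dstHost)) {
                    edges.computeIfAbsent(srcHost, k -> new HashSet<>()).add(dstHost);
                }
            }
        });
        return edges;
    }
}
```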
  14. Thelwall, M.: Results from a web impact factor crawler (2001) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 5490) [ClassicSimilarity], result of:
          0.22427888 = score(doc=5490,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 5490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5490)
      0.25 = coord(1/4)
    
    Abstract
    Web impact factors, the proposed web equivalent of impact factors for journals, can be calculated by using search engines. It has been found that the results are problematic because of the variable coverage of search engines as well as their ability to give significantly different results over short periods of time. The fundamental problem is that although some search engines provide a functionality that is capable of being used for impact calculations, this is not their primary task and therefore they do not give guarantees as to performance in this respect. In this paper, a bespoke web crawler designed specifically for the calculation of reliable WIFs is presented. This crawler was used to calculate WIFs for a number of UK universities, and the results of these calculations are discussed. The principal findings were that with certain restrictions, WIFs can be calculated reliably, but do not correlate with accepted research rankings owing to the variety of material hosted on university servers. Changes to the calculations to improve the fit of the results to research rankings are proposed, but there are still inherent problems undermining the reliability of the calculation. These problems still apply if the WIF scores are taken on their own as indicators of the general impact of any area of the Internet, but with care would not apply to online journals.
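For orientation, a web impact factor is conventionally the number of pages linking into a site divided by the number of pages the site hosts; the trivial sketch below assumes that standard definition and reflects none of the corrections the paper proposes.

```java
// Toy calculation of a web impact factor from crawler counts, assuming the
// conventional definition (inlinking pages divided by pages hosted on the site).
public class WebImpactFactor {
    public static double wif(long externalInlinkPages, long pagesOnSite) {
        if (pagesOnSite == 0) throw new IllegalArgumentException("no crawled pages for this site");
        return (double) externalInlinkPages / pagesOnSite;
    }

    public static void main(String[] args) {
        // Hypothetical counts for one university site.
        System.out.println(wif(1_250, 40_000)); // 0.03125
    }
}
```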
  15. Yoshikane, F.; Kageura, K.; Tsuji, K.: ¬A method for the comparative analysis of concentration of author productivity, giving consideration to the effect of sample size dependency of statistical measures (2003) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 123) [ClassicSimilarity], result of:
          0.22427888 = score(doc=123,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=123)
      0.25 = coord(1/4)
    
    Abstract
     Studies of the concentration of author productivity based upon counts of papers by individual authors will produce measures that change systematically with sample size. Yoshikane, Kageura, and Tsuji seek a statistical framework that avoids this scale-effect problem. Using the number of authors in a field as an absolute concentration measure, and Gini's index as a relative concentration measure, they describe four literatures from both viewpoints with measures insensitive to one another. Both measures will increase with sample size. They then plot profiles of the two measures on the basis of a Monte Carlo simulation of 1,000 trials for 20 equally spaced intervals and compare the characteristics of the literatures. Using data from conferences hosted by four academic societies between 1992 and 1997, they find a coefficient of loss exceeding 0.15, indicating that the measures depend strongly on sample size. The simulation shows that a larger sample size leads to lower absolute concentration and higher relative concentration. Comparisons made at the same sample size present quite different results than the original data and allow direct comparison of population characteristics.
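Gini's index, used above as the relative concentration measure, can be computed directly from per-author paper counts; the sketch below is a textbook formulation over sorted counts and ignores the sample-size corrections that are the paper's actual contribution.

```java
import java.util.Arrays;

// Standard Gini coefficient over per-author paper counts (the relative
// concentration measure named in the abstract). A textbook formula, not the
// authors' implementation.
public class GiniIndex {
    public static double gini(double[] papersPerAuthor) {
        double[] x = papersPerAuthor.clone();
        Arrays.sort(x);                       // ascending
        int n = x.length;
        double weighted = 0, total = 0;
        for (int i = 0; i < n; i++) {
            weighted += (2.0 * (i + 1) - n - 1) * x[i];
            total += x[i];
        }
        return weighted / (n * total);
    }

    public static void main(String[] args) {
        // Perfect equality -> 0; one prolific author among inactive ones -> close to 1.
        System.out.println(gini(new double[]{5, 5, 5, 5}));   // 0.0
        System.out.println(gini(new double[]{0, 0, 0, 20}));  // 0.75
    }
}
```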
  16. Moed, H.F.: ¬The effect of "open access" on citation impact : an analysis of ArXiv's condensed matter section (2007) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 1621) [ClassicSimilarity], result of:
          0.22427888 = score(doc=1621,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 1621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1621)
      0.25 = coord(1/4)
    
    Abstract
    This article statistically analyzes how the citation impact of articles deposited in the Condensed Matter section of the preprint server ArXiv (hosted by Cornell University), and subsequently published in a scientific journal, compares to that of articles in the same journal that were not deposited in the archive. Its principal aim is to further illustrate and roughly estimate the effect of two factors, early view and quality bias, on differences in citation impact between these two sets of papers, using citation data from Thomson Scientific's Web of Science. It presents estimates for a number of journals in the field of condensed matter physics. To discriminate between an open access effect and an early view effect, longitudinal citation data were analyzed covering a time period as long as 7 years. Quality bias was measured by calculating ArXiv citation impact differentials at the level of individual authors publishing in a journal, taking into account coauthorship. The analysis provided evidence of a strong quality bias and early view effect. Correcting for these effects, there is in a sample of six condensed matter physics journals studied in detail no sign of a general open access advantage of papers deposited in ArXiv. The study does provide evidence that ArXiv accelerates citation due to the fact that ArXiv makes papers available earlier rather than makes them freely available.
  17. Markey, K.; Swanson, F.; Jenkins, A.; Jennings, B.J.; St. Jean, B.; Rosenberg, V.; Yao, X.; Frost, R.L.: Designing and testing a web-based board game for teaching information literacy skills and concepts (2008) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 3609) [ClassicSimilarity], result of:
          0.22427888 = score(doc=3609,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 3609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3609)
      0.25 = coord(1/4)
    
    Abstract
     Purpose - This paper focuses on the design and testing of a web-based online board game for teaching undergraduate students information literacy skills and concepts. Design/methodology/approach - Project team members with expertise in game play, creative writing, programming, library research, graphic design and information seeking developed a web-based board game in which students used digital library resources to answer substantive questions on a scholarly topic. The project team hosted game play in a class of 75 undergraduate students. The instructor offered an extra-credit incentive to boost participation, resulting in 49 students on 13 teams playing the game. Post-game focus group interviews revealed problematic features and redesign priorities. Findings - A total of six teams met the criteria for the instructor's grade incentive, achieving a 53.1 percent accuracy rate on their answers to substantive questions about the Black Death; the accuracy rate for the seven unsuccessful teams was 35.7 percent. Discussed in detail are needed improvements to problematic game features such as offline tasks, feedback, challenge functionality, and the game's Black Death theme. Originality/value - Information literacy games test what players already know. Because this project's successful teams answered substantive questions about the Black Death at accuracy rates 20 points higher than the estimated probability of guessing, students did the research during game play, which demonstrates that games have merit for teaching students information literacy skills and concepts.
  18. Smiraglia, R.P.: ISKO 12's bookshelf - evolving intension : an editorial (2013) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 1636) [ClassicSimilarity], result of:
          0.22427888 = score(doc=1636,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 1636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1636)
      0.25 = coord(1/4)
    
    Abstract
     The 2012 biennial international research conference of the International Society for Knowledge Organization was held August 6-9 in Mysore, India. It was the second international ISKO conference to be held in India (Canada and India are the only countries to have hosted two international ISKO conferences), and for many attendees travel to the exotic Indian subcontinent was a new experience. Interestingly, the mix of people attending was quite different from recent meetings held in Europe or North America. The conference was lively and, as usual, jam-packed with new research. Registration took place on a veranda in the garden of the B. N. Bahadur Institute of Management Sciences at the University of Mysore, where the meetings were held. This graceful tree (Figure 1) kept us company and kept watch over our considerations (as indeed it does over the academic enterprise of the Institute). The conference theme was "Categories, Contexts and Relations in Knowledge Organization." The opening and closing sessions were fittingly devoted to serious introspection about the direction of the domain of knowledge organization. This editorial, in line with those following past international conferences, is an attempt to comment on the state of the domain by reflecting domain-analytically on the proceedings of the conference, primarily using bibliometric measures. In general, the domain seems secure in its intellectual moorings, as it continues to welcome a broad granular array of shifting research questions in its intension. The continual concretizing of the theoretical core of knowledge organization (KO) seems to act as a catalyst for emergent ideas, which can be observed as part of the evolving intension of the domain.
  19. Sugimoto, C.R.; Thelwall, M.: Scholars on soap boxes : science communication and dissemination in TED videos (2013) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 1678) [ClassicSimilarity], result of:
          0.22427888 = score(doc=1678,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 1678, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1678)
      0.25 = coord(1/4)
    
    Abstract
    Online videos provide a novel, and often interactive, platform for the popularization of science. One successful collection is hosted on the TED (Technology, Entertainment, Design) website. This study uses a range of bibliometric (citation) and webometric (usage and bookmarking) indicators to examine TED videos in order to provide insights into the type and scope of their impact. The results suggest that TED Talks impact primarily the public sphere, with about three-quarters of a billion total views, rather than the academic realm. Differences were found among broad disciplinary areas, with art and design videos having generally lower levels of impact but science and technology videos generating otherwise average impact for TED. Many of the metrics were only loosely related, but there was a general consensus about the most popular videos as measured through views or comments on YouTube and the TED site. Moreover, most videos were found in at least one online syllabus and videos in online syllabi tended to be more viewed, discussed, and blogged. Less-liked videos generated more discussion, although this may be because they are more controversial. Science and technology videos presented by academics were more liked than those by nonacademics, showing that academics are not disadvantaged in this new media environment.
  20. Schutz, A.; Buitelaar, P.: RelExt: a tool for relation extraction from text in ontology extension (2005) 0.06
    0.05606972 = product of:
      0.22427888 = sum of:
        0.22427888 = weight(_text_:hosted in 2078) [ClassicSimilarity], result of:
          0.22427888 = score(doc=2078,freq=2.0), product of:
            0.5034649 = queryWeight, product of:
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.062434554 = queryNorm
            0.44547075 = fieldWeight in 2078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.063882 = idf(docFreq=37, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2078)
      0.25 = coord(1/4)
    
    Abstract
    Domain ontologies very rarely model verbs as relations holding between concepts. However, the role of the verb as a central connecting element between concepts is undeniable. Verbs specify the interaction between the participants of some action or event by expressing relations between them. In parallel, it can be argued from an ontology engineering point of view that verbs express a relation between two classes that specify domain and range. The work described here is concerned with relation extraction for ontology extension along these lines. We describe a system (RelExt) that is capable of automatically identifying highly relevant triples (pairs of concepts connected by a relation) over concepts from an existing ontology. RelExt works by extracting relevant verbs and their grammatical arguments (i.e. terms) from a domain-specific text collection and computing corresponding relations through a combination of linguistic and statistical processing. The paper includes a detailed description of the system architecture and evaluation results on a constructed benchmark. RelExt has been developed in the context of the SmartWeb project, which aims at providing intelligent information services via mobile broadband devices on the FIFA World Cup that will be hosted in Germany in 2006. Such services include location based navigational information as well as question answering in the football domain.
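To illustrate the general shape of verb-centred relation extraction (a verb connecting two concept terms yields a domain-relation-range triple), here is a deliberately naive token-window sketch; RelExt's actual linguistic parsing and statistical weighting are not reproduced, and the example vocabulary is made up.

```java
import java.util.*;

// Toy illustration of verb-centred triple extraction: whenever two known
// concept terms occur in one sentence with a verb-like token between them,
// emit (domain, relation, range). RelExt itself relies on real linguistic
// processing and corpus statistics; none of that is reproduced here.
public class NaiveTripleExtractor {
    public static List<String[]> extract(String sentence, Set<String> concepts, Set<String> verbs) {
        List<String[]> triples = new ArrayList<>();
        String[] tokens = sentence.toLowerCase().replaceAll("[^a-z ]", " ").split("\\s+");
        for (int i = 0; i < tokens.length; i++) {
            if (!concepts.contains(tokens[i])) continue;
            for (int j = i + 1; j < tokens.length; j++) {
                if (!verbs.contains(tokens[j])) continue;
                for (int k = j + 1; k < tokens.length; k++) {
                    if (concepts.contains(tokens[k])) {
                        triples.add(new String[]{tokens[i], tokens[j], tokens[k]});
                    }
                }
            }
        }
        return triples;
    }

    public static void main(String[] args) {
        Set<String> concepts = Set.of("team", "match", "player");
        Set<String> verbs = Set.of("wins", "plays");
        extract("The team wins the match", concepts, verbs)
                .forEach(t -> System.out.println(String.join(" -- ", t))); // team -- wins -- match
    }
}
```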

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 804
  • m 310
  • el 108
  • s 93
  • i 22
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1
