-
Craven, T.: Changes in metatag descriptions over time (2001)
0.07
0.074270464 = product of:
0.29708186 = sum of:
0.29708186 = weight(_text_:home in 601) [ClassicSimilarity], result of:
0.29708186 = score(doc=601,freq=4.0), product of:
0.4218467 = queryWeight, product of:
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.06551658 = queryNorm
0.7042413 = fieldWeight in 601, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.0546875 = fieldNorm(doc=601)
0.25 = coord(1/4)
- Abstract
- Four sets of Web pages previously visited in the summer of 2000 were revisited one year later. Of 707 pages containing metatag descriptions in 2000, 586 retained descriptions in 2001, and, of 1,230 pages lacking descriptions in 2000, 101 had descriptions in 2001. Home pages appeared to both lose and change descriptions more than other pages, with about 19% of descriptions changed in the two sets where home pages predominated versus about 12% in the other two sets. About two-thirds of changes involved minor revisions, and changes fell into a wide variety of categories. Some implications for software to assist in description revision are discussed
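- The kind of year-on-year comparison described here can be approximated with a short script. The following is only a minimal sketch (not the author's method): it pulls the content of a description metatag with a regular expression and classifies a page as retained, changed, lost or gained. The class name and the assumption of a simple, well-formed tag are illustrative; a real survey would need an HTML parser and HTTP fetching.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: extract the description metatag from an HTML page so that
// versions fetched in different years could be compared. Assumes a simple,
// well-formed <meta name="description" content="..."> tag.
public class MetaDescription {

    private static final Pattern DESCRIPTION = Pattern.compile(
        "<meta\\s+name=[\"']description[\"']\\s+content=[\"'](.*?)[\"']",
        Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    static Optional<String> extract(String html) {
        Matcher m = DESCRIPTION.matcher(html);
        return m.find() ? Optional.of(m.group(1).trim()) : Optional.empty();
    }

    public static void main(String[] args) {
        String page2000 = "<html><head><meta name=\"description\" content=\"Old description\"></head></html>";
        String page2001 = "<html><head><meta name=\"description\" content=\"Revised description\"></head></html>";
        Optional<String> before = extract(page2000);
        Optional<String> after = extract(page2001);
        // Classify the page roughly, as in the categories discussed above.
        if (before.isPresent() && after.isPresent()) {
            System.out.println(before.equals(after) ? "description retained unchanged" : "description changed");
        } else if (before.isPresent()) {
            System.out.println("description lost");
        } else if (after.isPresent()) {
            System.out.println("description gained");
        } else {
            System.out.println("never had a description");
        }
    }
}
```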
-
Reed, D.: Essential HTML fast (1997)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
0.28761885 = score(doc=6851,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 6851, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=6851)
0.25 = coord(1/4)
- Abstract
- This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; how to use style sheets and font sizes; and plans for mathematical equation markup. Integration of CGI scripts, Java and ActiveX into your web site is also discussed
-
Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 171) [ClassicSimilarity], result of:
0.28761885 = score(doc=171,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 171, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=171)
0.25 = coord(1/4)
- Abstract
- Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, RealAudio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, namely bandwidth, searching tools, security, and legal issues
-
Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 604) [ClassicSimilarity], result of:
0.28761885 = score(doc=604,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 604, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=604)
0.25 = coord(1/4)
- Abstract
- Gives an overview of Web information system development. Discusses some basic concepts and technologies, such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web-based, end-user-oriented travel information system. Finally, addresses some issues in developing WWW information systems
-
Ameritech releases Dynix WebPac on NT (1998)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
0.28761885 = score(doc=2782,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 2782, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=2782)
0.25 = coord(1/4)
- Abstract
- Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
-
OCLC completes SiteSearch 4.0 field test (1998)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
0.28761885 = score(doc=3078,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 3078, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=3078)
0.25 = coord(1/4)
- Abstract
- OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs
-
Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
0.28761885 = score(doc=4618,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 4618, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=4618)
0.25 = coord(1/4)
- Abstract
- Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
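- As an illustration of the applet-based approach mentioned above (a generic sketch, not the authors' actual courseware), a teaching applet of the period looked roughly as follows; the class name is invented and the java.applet API has since been deprecated in modern JDKs.

```java
import java.applet.Applet;
import java.awt.Graphics;

// Minimal sketch of a 1990s-style teaching applet, embedded in a Web page via
// an <applet> tag. Illustrative only; not the authors' code.
public class LessonApplet extends Applet {
    @Override
    public void init() {
        setBackground(java.awt.Color.WHITE);
    }

    @Override
    public void paint(Graphics g) {
        g.drawString("Computer assisted learning module", 20, 20);
        g.drawString("Interactive lesson content would be drawn here.", 20, 40);
    }
}
```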
-
Bates, C.: Web programming : building Internet applications (2000)
0.07
0.07190471 = product of:
0.28761885 = sum of:
0.28761885 = weight(_text_:java in 130) [ClassicSimilarity], result of:
0.28761885 = score(doc=130,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.62291753 = fieldWeight in 130, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=130)
0.25 = coord(1/4)
- Object
- Java
-
Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003)
0.07
0.07164297 = product of:
0.14328595 = sum of:
0.10785706 = weight(_text_:java in 55) [ClassicSimilarity], result of:
0.10785706 = score(doc=55,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.23359407 = fieldWeight in 55, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0234375 = fieldNorm(doc=55)
0.035428878 = weight(_text_:und in 55) [ClassicSimilarity], result of:
0.035428878 = score(doc=55,freq=22.0), product of:
0.14530917 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06551658 = queryNorm
0.24381724 = fieldWeight in 55, product of:
4.690416 = tf(freq=22.0), with freq of:
22.0 = termFreq=22.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0234375 = fieldNorm(doc=55)
0.5 = coord(2/4)
- Content
- "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, its database can also be searched outside the Google website. First of all there is the "Google Toolbar" for Internet Explorer, which gives that browser its own bar for Google searches. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser as well. A Google search box can also be placed on one's own web page - only four lines of HTML code are needed for this. Incidentally, a Google search can even be started without any browser at all. For this purpose the company released its API ("Application Programming Interface") in April of last year, which can be built into one's own programs. A Google search can be started with an e-mail, for example: the search terms are entered in the subject line of an otherwise empty e-mail, which is sent to the address google@capeclear.com. Shortly afterwards an automatic reply arrives with the first ten hits. Given the necessary skills, Google queries can also be built into web services - programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python or Java. Calishain and Dornfest even present a number of offbeat sites that use such programs to produce abstract poems and other works of art."
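By way of illustration only, the operators described above (default AND, "OR", exclusion with a minus sign, "intitle:") and the idea of querying Google from a program rather than a browser can be combined as in the following sketch. It is not an excerpt from "Google Hacks" and not the SOAP API the article refers to; it simply sends a plain request to the ordinary search URL, which may be refused or rate-limited, and the query terms are invented.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch only: build a query with the operators described in the article and
// send it to the ordinary search URL from a program instead of a browser.
public class GoogleQuerySketch {
    public static void main(String[] args) throws Exception {
        String query = String.join(" ",
            "intitle:suchmaschine",   // term must appear in the page title
            "google OR yahoo",        // OR instead of the default AND
            "-amazon");               // exclude hits containing this term

        String url = "https://www.google.com/search?q="
                + URLEncoder.encode(query, StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", "Mozilla/5.0 (illustrative example)")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Query      : " + query);
        System.out.println("HTTP status: " + response.statusCode());
    }
}
```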
-
Wu, Y.: Organization of complex topics in comprehensive classification schemes : case studies of disaster and security (2023)
0.06
0.06497312 = product of:
0.2598925 = sum of:
0.2598925 = weight(_text_:home in 2119) [ClassicSimilarity], result of:
0.2598925 = score(doc=2119,freq=6.0), product of:
0.4218467 = queryWeight, product of:
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.06551658 = queryNorm
0.6160828 = fieldWeight in 2119, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.0390625 = fieldNorm(doc=2119)
0.25 = coord(1/4)
- Abstract
- This research investigates how comprehensive classifications and home-grown classifications organize complex topics. Two comprehensive classifications and two home-grown taxonomies are used to examine two complex topics: disaster and security. The two comprehensive classifications are the Library of Congress Classification and the Classification Scheme for Chinese Libraries. The two home-grown taxonomies are AIRS 211 LA County Taxonomy of Human Services - Disaster Services, and the Human Security Taxonomy. It is found that a comprehensive classification may provide many subclasses of a complex topic, which are scattered in various classes. Occasionally the classification scheme may provide several small taxonomies that organize the terms of a subclass of the complex topic that are pulled from multiple classes. However, the comprehensive classification provides no organization of the major subclasses of the complex topic. The lack of organization of the major subclasses may prevent users from understanding the complex topic systematically, and so prevent them from selecting an appropriate classification term for it. Ideally a comprehensive classification should provide a high-level conceptual framework for the complex topic, or at least organize the major subclasses in a way that helps users understand the complex topic systematically.
-
Ryan, S.; Leith, D.: Training with the web : Internet training in an academic library environment (1995)
0.06
0.0636604 = product of:
0.2546416 = sum of:
0.2546416 = weight(_text_:home in 2483) [ClassicSimilarity], result of:
0.2546416 = score(doc=2483,freq=4.0), product of:
0.4218467 = queryWeight, product of:
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.06551658 = queryNorm
0.6036354 = fieldWeight in 2483, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.046875 = fieldNorm(doc=2483)
0.25 = coord(1/4)
- Abstract
- Describes the first phase of an Internet training programme, presented to academic staff at Sydney University, New South Wales, which included a brief introduction and comprehensive review of the Internet, using NCSA Mosaic and Netscape software as presentation tools. The programme used locally produced Hypertext Markup Language (HTML) documents with live and 'canned' links to Internet tools and resources. Participants were presented with a 'things to see' home page on individual workstations and were free to explore areas of interest using this home page as a starting point. They were also provided with their own Mac and DOS discs as handouts, containing a World Wide Web (WWW) browser and local HTML documents, some of which contained links to Internet tools and resources. An evaluation of the programme indicated the success of the WWW browsers as an aid to Internet training
-
Cree, J.S.: Data conversion and migration at the libraries of the Home Office and the Department of the Environment (1997)
0.06
0.0636604 = product of:
0.2546416 = sum of:
0.2546416 = weight(_text_:home in 3175) [ClassicSimilarity], result of:
0.2546416 = score(doc=3175,freq=4.0), product of:
0.4218467 = queryWeight, product of:
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.06551658 = queryNorm
0.6036354 = fieldWeight in 3175, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.046875 = fieldNorm(doc=3175)
0.25 = coord(1/4)
- Abstract
- Describes the experience of data conversion and migration at the libraries of the Home Office (HO) and the Dept. of the Environment (DoE), UK. Both HO and DoE libraries had changed from Anglo-American code cataloguing to AACR2 cataloguing in the mid-1970s. Both libraries were selective in identifying records for conversion initially to BLAISE-LOCAS. Conversion to integrated library systems from BLAISE-LOCAS MARC tapes produced problems in both libraries with location/holdings fields which were largely resolved at HO, but not resolved at DoE. HO experienced problems converting to a system with fixed field lengths. HO converted subject keywords to form a rudimentary, non-standard thesaurus which required the addition of Broader Term and Narrower Term to meet the challenge of computerized searching. DoE converted a non-thesaurus subject index to an authority file, but continued to maintain the index on a stand-alone DataEase application for use by cataloguers. Neither library converted acquisitions data
-
Veltman, K.H.: From Recorded World to Recording Worlds (2007)
0.06
0.06331006 = product of:
0.12662011 = sum of:
0.021585818 = weight(_text_:und in 1512) [ClassicSimilarity], result of:
0.021585818 = score(doc=1512,freq=6.0), product of:
0.14530917 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06551658 = queryNorm
0.14855097 = fieldWeight in 1512, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.02734375 = fieldNorm(doc=1512)
0.1050343 = weight(_text_:home in 1512) [ClassicSimilarity], result of:
0.1050343 = score(doc=1512,freq=2.0), product of:
0.4218467 = queryWeight, product of:
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.06551658 = queryNorm
0.2489869 = fieldWeight in 1512, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.4387774 = idf(docFreq=192, maxDocs=44421)
0.02734375 = fieldNorm(doc=1512)
0.5 = coord(2/4)
- Abstract
- The range, depths and limits of what we know depend on the media with which we attempt to record our knowledge. This essay begins with a brief review of developments in media - stone, manuscripts, books and digital media - to trace how collections of recorded knowledge had grown to 235,000 by 1837 and to over 100 million unique titles in a single database, including over 1 billion individual listings, by 2007. The advent of digital media has brought full text scanning and electronic networks, which enable us to consult digital books and images from our office, home or potentially even with our cell phones. These magnificent developments raise a number of concerns and new challenges. An historical survey of major projects that changed the world reveals that they have taken from one to eight centuries. This helps explain why commercial offerings, which offer useful and even profitable short-term solutions, often undermine a long-term vision. New technologies have the potential to transform our approach to knowledge, but require a vision of a systematic new approach to knowledge. This paper outlines four ingredients for such a vision in the European context. First, the scope of European observatories should be expanded to inform memory institutions of the latest technological developments. Second, the quest for a European Digital Library should be expanded to include a distributed repository, a digital reference room and a virtual agora, whereby memory institutions will be linked with current research. Third, there is a need for an institute on Knowledge Organization that takes up anew Otlet's vision and the pioneering efforts of the Mundaneum (Brussels) and the Bridge (Berlin). Fourth, we need to explore the requirements for a Universal Digital Library, which works with countries around the world rather than simply imposing on them an external system. Here, the efforts of the proposed European University of Culture could be useful. Ultimately we need new systems that open research into multiple ways of knowing, multiple "knowledges". In the past, we went to libraries to study the recorded world. In a world where cameras and sensors are omnipresent we have new recording worlds. In future, we may also use these recording worlds to study the riches of libraries.
- Content
- Cf. the note in: Online-Mitteilungen 2007, Nr.91 [=Mitt. VOEB 60(2007) H.3], S.15: "At the conference "Herausforderung: Digitale Langzeitarchivierung - Strategien und Praxis europäischer Kooperation", held at the Deutsche Nationalbibliothek (Frankfurt am Main) on 20-21 April 2007, the speakers dealt not only with the preservation of cultural heritage but also, among other things, with the 'recording of worlds'. How this 'recording of the world' can be managed (even) better in future, given the abundance and steady growth of information, was the subject of Kim H. Veltman's talk. He presented four highly thought-provoking approaches: - the creation of a central European body that informs memory institutions about the latest technological developments - the establishment of a digital reference room and a virtual agora within the European Digital Library - the founding of an institute for knowledge organization - an investigation of the requirements for a "Universal Digital Library"."
-
Braeckman, J.: ¬The integration of library information into a campus wide information system (1996)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 729) [ClassicSimilarity], result of:
0.2516665 = score(doc=729,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 729, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=729)
0.25 = coord(1/4)
- Abstract
- Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
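- The gateway idea described here (queries and results translated between the Web environment and a database query language) can be sketched in a few lines. This is a generic illustration using the JDK's built-in HTTP server, not the Leuven software, and the table and column names are invented.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Generic sketch of a Web-to-database gateway: an HTTP query parameter is
// translated into a (parameterised) catalogue query. Invented names throughout.
public class CwisGateway {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/search", exchange -> {
            String rawQuery = exchange.getRequestURI().getRawQuery(); // e.g. q=java
            String term = rawQuery == null ? "" :
                URLDecoder.decode(rawQuery.replaceFirst("^q=", ""), StandardCharsets.UTF_8);

            // In a real gateway this would go to the library system via JDBC,
            // with the term bound as a parameter rather than concatenated.
            String sql = "SELECT title, author FROM catalogue WHERE title LIKE ?";
            String reply = "Would run: " + sql + "  with parameter: %" + term + "%";

            byte[] body = reply.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Gateway listening on http://localhost:8080/search?q=term");
    }
}
```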
-
Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 756) [ClassicSimilarity], result of:
0.2516665 = score(doc=756,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 756, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=756)
0.25 = coord(1/4)
- Abstract
- Describes 2 research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections
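- A much simplified flavour of the colour-similarity retrieval mentioned above can be given in a few lines: a global histogram intersection over coarse RGB bins. This is only an illustration; VisualSEEK's localised colour regions and spatial relationships are considerably more sophisticated.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Simplified sketch: compare two images by intersecting coarse RGB histograms.
public class ColourSimilarity {

    // 4 bins per channel -> 64-bin joint histogram, normalised to sum to 1.
    static double[] histogram(BufferedImage img) {
        double[] h = new double[64];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) / 64;
                int g = ((rgb >> 8) & 0xFF) / 64;
                int b = (rgb & 0xFF) / 64;
                h[r * 16 + g * 4 + b]++;
            }
        }
        double total = (double) img.getWidth() * img.getHeight();
        for (int i = 0; i < h.length; i++) h[i] /= total;
        return h;
    }

    // Histogram intersection: 1.0 means identical colour distributions.
    static double similarity(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += Math.min(a[i], b[i]);
        return s;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage q = ImageIO.read(new File(args[0]));
        BufferedImage c = ImageIO.read(new File(args[1]));
        System.out.printf("Colour similarity: %.3f%n", similarity(histogram(q), histogram(c)));
    }
}
```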
-
Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
0.2516665 = score(doc=3611,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 3611, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=3611)
0.25 = coord(1/4)
- Abstract
- Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos' WebSketch and ACD Interactive Lab adopt a different approach. Using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend to other electronic publications
-
Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
0.2516665 = score(doc=4287,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 4287, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=4287)
0.25 = coord(1/4)
- Abstract
- Describes the work undertaken by the William Blake Archive, University of Virginia, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one file to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet
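- The general technique of carrying textual documentation inside the image file can be sketched as follows. This writes a standard JPEG comment (COM) segment directly after the SOI marker; it is not the Blake Archive's actual ID record layout, which is not specified here, and the sample record text is invented.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: embed a textual record in a JPEG by inserting a COM (0xFFFE) segment
// after the SOI marker, so the metadata travels with the image file.
public class JpegComment {
    public static void main(String[] args) throws Exception {
        byte[] jpeg = Files.readAllBytes(Path.of(args[0]));
        if (jpeg.length < 2 || (jpeg[0] & 0xFF) != 0xFF || (jpeg[1] & 0xFF) != 0xD8) {
            throw new IllegalArgumentException("Not a JPEG file (missing SOI marker)");
        }

        byte[] record = "ID: illustration scan, 300 dpi (sample record)".getBytes(StandardCharsets.UTF_8);
        int segLen = record.length + 2;                 // length field counts itself
        byte[] out = new byte[jpeg.length + 4 + record.length];

        out[0] = (byte) 0xFF; out[1] = (byte) 0xD8;     // SOI
        out[2] = (byte) 0xFF; out[3] = (byte) 0xFE;     // COM marker
        out[4] = (byte) (segLen >> 8);                  // segment length, big-endian
        out[5] = (byte) (segLen & 0xFF);
        System.arraycopy(record, 0, out, 6, record.length);
        System.arraycopy(jpeg, 2, out, 6 + record.length, jpeg.length - 2);

        Files.write(Path.of(args[1]), out);
        System.out.println("Wrote annotated copy with a " + record.length + "-byte comment");
    }
}
```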
-
Priss, U.: ¬A graphical interface for conceptually navigating faceted thesauri (1998)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 658) [ClassicSimilarity], result of:
0.2516665 = score(doc=658,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 658, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=658)
0.25 = coord(1/4)
- Abstract
- This paper describes a graphical interface for the navigation and construction of faceted thesauri that is based on formal concept analysis. Each facet of a thesaurus is represented as a mathematical lattice that is further subdivided into components. Users can graphically navigate through the Java implementation of the interface by clicking on terms that connect facets and components. Since there are many applications for thesauri in the knowledge representation field, such a graphical interface has the potential of being very useful
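- For readers unfamiliar with formal concept analysis, the core operation behind such a lattice is the derivation operator. The following is a minimal sketch with an invented toy context, not Priss's interface or data: given a set of objects, it returns the attributes they all share; applying it twice yields the extent of a formal concept.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the FCA derivation operator on a toy formal context.
public class Derivation {

    static Set<String> commonAttributes(Set<String> objects,
                                        Map<String, Set<String>> context) {
        Set<String> common = null;
        for (String obj : objects) {
            Set<String> attrs = context.getOrDefault(obj, Set.of());
            if (common == null) {
                common = new HashSet<>(attrs);
            } else {
                common.retainAll(attrs);
            }
        }
        return common == null ? Set.of() : common;
    }

    public static void main(String[] args) {
        // Invented toy context: thesaurus terms and their facet values.
        Map<String, Set<String>> context = Map.of(
            "lake",  Set.of("natural", "stagnant", "inland"),
            "river", Set.of("natural", "running", "inland"),
            "canal", Set.of("artificial", "running", "inland"));

        // Prints the attributes shared by lake and river, e.g. [natural, inland].
        System.out.println(commonAttributes(Set.of("lake", "river"), context));
    }
}
```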
-
Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
0.2516665 = score(doc=1211,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 1211, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=1211)
0.25 = coord(1/4)
- Abstract
- Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals
-
Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002)
0.06
0.06291662 = product of:
0.2516665 = sum of:
0.2516665 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
0.2516665 = score(doc=4608,freq=2.0), product of:
0.4617286 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06551658 = queryNorm
0.5450528 = fieldWeight in 4608, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=4608)
0.25 = coord(1/4)
- Abstract
- In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document, containing computational services, a semi-structured knowledge repository for uniformly storing and accessing context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
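- To make the architecture concrete, a highly simplified sketch of what such a wrapper might look like is given below. The class and method names are invented for illustration; the actual prototype, built on mobile agents and XML, is not reproduced here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Highly simplified sketch of the Living Documents idea: one object bundling
// (1) the document's content, (2) a semi-structured context repository and
// (3) pluggable computational services. Names are invented.
public class LivingDocument {
    private final String content;
    private final Map<String, String> contextRepository = new HashMap<>();
    private final Map<String, Function<String, String>> services = new HashMap<>();

    public LivingDocument(String content) {
        this.content = content;
        // A built-in service: naive keyword matching against the content.
        services.put("contains", term ->
            Boolean.toString(content.toLowerCase().contains(term.toLowerCase())));
    }

    public void recordContext(String key, String value) { contextRepository.put(key, value); }
    public String context(String key) { return contextRepository.get(key); }

    public String invoke(String service, String argument) {
        Function<String, String> f = services.get(service);
        return f == null ? "unknown service" : f.apply(argument);
    }

    public static void main(String[] args) {
        LivingDocument doc = new LivingDocument("Position paper on context-aware retrieval.");
        doc.recordContext("lastQuery", "context-aware retrieval");
        System.out.println(doc.invoke("contains", "retrieval"));  // true
        System.out.println(doc.context("lastQuery"));
    }
}
```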