-
Judge, A.J.N.: Strategic correspondences : computer-aided insight scaffolding (1996)
0.09
0.08508619 = product of:
0.34034476 = sum of:
0.34034476 = weight(_text_:judge in 3816) [ClassicSimilarity], result of:
0.34034476 = score(doc=3816,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.68334585 = fieldWeight in 3816, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0625 = fieldNorm(doc=3816)
0.25 = coord(1/4)
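The explanation tree above is standard Lucene ClassicSimilarity (classic TF-IDF) output, and its arithmetic can be checked directly. The following sketch reproduces the first hit's score; all constants are copied from the tree itself, and only the idf value is re-derived from its stated inputs using the ClassicSimilarity definition idf = 1 + ln(maxDocs / (docFreq + 1)).

```python
import math

# Re-derive idf from the inputs shown in the tree:
# idf(docFreq=52, maxDocs=44421)
doc_freq, max_docs = 52, 44421
idf = 1 + math.log(max_docs / (doc_freq + 1))      # ≈ 7.731176, as shown above

query_norm = 0.06442181   # queryNorm, copied from the tree
freq = 2.0                # "judge" occurs twice in the matched field
field_norm = 0.0625       # fieldNorm (length normalization) for this field
coord = 1 / 4             # coord(1/4): 1 of 4 query terms matched

query_weight = idf * query_norm                     # ≈ 0.49805635
field_weight = math.sqrt(freq) * idf * field_norm   # tf = sqrt(freq) ≈ 1.4142135
score = coord * query_weight * field_weight         # ≈ 0.08508619, the final score

print(f"{score:.8f}")
```

The rounded display value 0.09 above is this score truncated for the hit list; the full product of queryWeight, fieldWeight, and coord reproduces 0.08508619.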
-
Rader, H.B.: User education and information literacy for the next decade : an international perspective (1995)
0.08
0.08487223 = product of:
0.33948892 = sum of:
0.33948892 = weight(_text_:handling in 5416) [ClassicSimilarity], result of:
0.33948892 = score(doc=5416,freq=6.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.84019136 = fieldWeight in 5416, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0546875 = fieldNorm(doc=5416)
0.25 = coord(1/4)
- Abstract
- In the information age, marked by global information highways and instant information sharing worldwide, all citizens must become knowledgeable about, and efficient in, handling information. People need training in how to organize, evaluate, and analyze the enormous array of information now available in both print and electronic formats. Information skills need to be taught and developed at all levels, from elementary schools through universities. Librarians worldwide are uniquely qualified through education, training, and experience to provide people with necessary information-handling skills at all levels. Using available data regarding information literacy programs on the international level, Rader proposes a course of action for the next decade
-
Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017)
0.08
0.08487223 = product of:
0.33948892 = sum of:
0.33948892 = weight(_text_:handling in 4939) [ClassicSimilarity], result of:
0.33948892 = score(doc=4939,freq=6.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.84019136 = fieldWeight in 4939, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0546875 = fieldNorm(doc=4939)
0.25 = coord(1/4)
- Abstract
- The Semantic Web has attracted much attention from both academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some of my own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
-
Robinson, B.: Electronic document handling using SGML (1994)
0.08
0.08400172 = product of:
0.33600688 = sum of:
0.33600688 = weight(_text_:handling in 1039) [ClassicSimilarity], result of:
0.33600688 = score(doc=1039,freq=2.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.8315737 = fieldWeight in 1039, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.09375 = fieldNorm(doc=1039)
0.25 = coord(1/4)
-
Robinson, B.: Electronic document handling using SGML : hypertext interchange and SGML (1994)
0.08
0.08400172 = product of:
0.33600688 = sum of:
0.33600688 = weight(_text_:handling in 1040) [ClassicSimilarity], result of:
0.33600688 = score(doc=1040,freq=2.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.8315737 = fieldWeight in 1040, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.09375 = fieldNorm(doc=1040)
0.25 = coord(1/4)
-
Barberá, J.: ¬The Intranet : a new concept for corporate information handling (1996)
0.08
0.08400172 = product of:
0.33600688 = sum of:
0.33600688 = weight(_text_:handling in 105) [ClassicSimilarity], result of:
0.33600688 = score(doc=105,freq=2.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.8315737 = fieldWeight in 105, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.09375 = fieldNorm(doc=105)
0.25 = coord(1/4)
-
Liang, Z.; Mao, J.; Li, G.: Bias against scientific novelty : a prepublication perspective (2023)
0.08
0.08400172 = product of:
0.33600688 = sum of:
0.33600688 = weight(_text_:handling in 1846) [ClassicSimilarity], result of:
0.33600688 = score(doc=1846,freq=8.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.8315737 = fieldWeight in 1846, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.046875 = fieldNorm(doc=1846)
0.25 = coord(1/4)
- Abstract
- Novel ideas often experience resistance from incumbent forces. While evidence of the bias against novelty has been widely identified in science, there is still a lack of large-scale quantitative work to study this problem occurring in the prepublication process of manuscripts. This paper examines the association between manuscript novelty and handling time of publication based on 778,345 articles in 1,159 journals indexed by PubMed. Measuring the novelty as the extent to which manuscripts disrupt existing knowledge, we found systematic evidence that higher novelty is associated with longer handling time. Matching and fixed-effect models were adopted to confirm the statistical significance of this pattern. Moreover, submissions from prestigious authors and institutions have the advantage of shorter handling time, but this advantage diminishes as manuscript novelty increases. In addition, we found longer handling time is negatively related to the impact of manuscripts, while the relationships between novelty and 3- and 5-year citations are U-shaped. This study expands the existing knowledge of the novelty bias by examining its existence in the prepublication process of manuscripts.
-
Stern, B.T.: ¬The new ADONIS (1992)
0.08
0.07919758 = product of:
0.3167903 = sum of:
0.3167903 = weight(_text_:handling in 3743) [ClassicSimilarity], result of:
0.3167903 = score(doc=3743,freq=4.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.78401524 = fieldWeight in 3743, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0625 = fieldNorm(doc=3743)
0.25 = coord(1/4)
- Abstract
- Reports on the 2 year trial period of the document delivery system ADONIS, developed for the pharmaceutical industry. A market survey reports the needs of the pharmaceutical industry for such a product. Its success as a CD-ROM product depends on rapid conversion from paper in less than 3 weeks and on special compression techniques to limit the number of CD-ROMs produced. Discusses handling of source material, the production software, errata handling and the hardware. Considers current developments, the benefits of using ADONIS generally, and those for publishers
-
Wilson, T.D.: Redesigning the university library in the digital age (1998)
0.08
0.07919758 = product of:
0.3167903 = sum of:
0.3167903 = weight(_text_:handling in 1494) [ClassicSimilarity], result of:
0.3167903 = score(doc=1494,freq=4.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.78401524 = fieldWeight in 1494, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0625 = fieldNorm(doc=1494)
0.25 = coord(1/4)
- Abstract
- Business process re-engineering (or redesign) has achieved mixed results in business and industry but it offers an approach to thinking about the future of academic libraries in the digital age that is worth considering. This paper outlines the forces that are currently affecting academic libraries in the UK and proposes a strategy whereby the transformation from the handling of artefacts to the handling of electronic sources may be effected with maximum benefit to the information user.
-
Paris, C.G.: Chemical structure handling by computer (1997)
0.08
0.07919758 = product of:
0.3167903 = sum of:
0.3167903 = weight(_text_:handling in 3254) [ClassicSimilarity], result of:
0.3167903 = score(doc=3254,freq=4.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.78401524 = fieldWeight in 3254, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0625 = fieldNorm(doc=3254)
0.25 = coord(1/4)
- Abstract
- State-of-the-art review of computerized chemical structure handling and the way in which the need to represent chemical structures and structure diagrams by computer software has created a subdomain of information retrieval that integrates the requirements of research chemists for graph-theoretic algorithms with the database designs of computer science. Identifies and discusses the current research topics and selected portions of the literature, particularly during the period of its most rapid expansion, between 1989 and 1996
-
Dalmau, M.; Floyd, R.; Jiao, D.; Riley, J.: Integrating thesaurus relationships into search and browse in an online photograph collection (2005)
0.08
0.07875453 = product of:
0.15750906 = sum of:
0.017506186 = weight(_text_:und in 3583) [ClassicSimilarity], result of:
0.017506186 = score(doc=3583,freq=2.0), product of:
0.14288108 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06442181 = queryNorm
0.12252277 = fieldWeight in 3583, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=3583)
0.14000288 = weight(_text_:handling in 3583) [ClassicSimilarity], result of:
0.14000288 = score(doc=3583,freq=2.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.34648907 = fieldWeight in 3583, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.0390625 = fieldNorm(doc=3583)
0.5 = coord(2/4)
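Unlike the single-term hits above, this entry matches two query terms, so the per-term TF-IDF scores are summed and then scaled by coord = matched terms / total terms = 2/4. A minimal sketch of that combination, with all constants copied from the explanation tree:

```python
import math

QUERY_NORM = 0.06442181  # shared queryNorm, copied from the tree

def term_score(idf: float, freq: float, field_norm: float) -> float:
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

# The two matched terms from the explanation above:
und = term_score(idf=2.217899, freq=2.0, field_norm=0.0390625)       # ≈ 0.01750619
handling = term_score(idf=6.272122, freq=2.0, field_norm=0.0390625)  # ≈ 0.14000288

# Sum the term scores, then apply coord(2/4) = 0.5.
score = (und + handling) * (2 / 4)                                   # ≈ 0.07875453

print(f"{score:.8f}")
```

Note how the idf values dominate: the rare term contributes roughly eight times as much as the common one at the same term frequency and field length, which is why every top hit in this list matches one of the rare query terms.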
- Abstract
- Purpose - Seeks to share with digital library practitioners the development process of an online image collection that integrates the syndetic structure of a controlled vocabulary to improve end-user search and browse functionality. Design/methodology/approach - Surveys controlled vocabulary structures and their utility for catalogers and end-users. Reviews research literature and usability findings that informed the specifications for integration of the controlled vocabulary structure into search and browse functionality. Discusses database functions facilitating query expansion using a controlled vocabulary structure, and web application handling of user queries and results display. Concludes with a discussion of open-source alternatives and reuse of database and application components in other environments. Findings - Affirms that structured forms of browse and search can be successfully integrated into digital collections to significantly improve the user's discovery experience. Establishes ways in which the technologies used in implementing enhanced search and browse functionality can be abstracted to work in other digital collection environments. Originality/value - Significant amounts of research on integrating thesauri structures into search and browse functionalities exist, but examples of online resources that have implemented this approach are few in comparison. The online image collection surveyed in this paper can serve as a model to other designers of digital library resources for integrating controlled vocabularies and metadata structures into more dynamic search and browse functionality for end-users.
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011)
0.08
0.07520627 = product of:
0.3008251 = sum of:
0.3008251 = weight(_text_:judge in 197) [ClassicSimilarity], result of:
0.3008251 = score(doc=197,freq=4.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.6039981 = fieldWeight in 197, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=197)
0.25 = coord(1/4)
- Abstract
- The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall-based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
-
Borko, H.; Chatman, S.: Criteria for acceptable abstracts : a survey of abstractors' instructions (1963)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 686) [ClassicSimilarity], result of:
0.29780164 = score(doc=686,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 686, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=686)
0.25 = coord(1/4)
- Abstract
- The need for criteria by which to judge the adequacy of an abstract is felt most strongly when evaluating machine-produced abstracts. In order to develop a set of criteria, a survey was conducted of the instructions prepared by various scientific publications as a guide to their abstracters in the preparation of copy. One hundred and thirty sets of instructions were analyzed and compared as to their function, content, and form. It was concluded that, while differences in subject matter do not necessarily require different kinds of abstracts, there are significant variations between the informative and the indicative abstract. A set of criteria for the writing of an acceptable abstract of science literature was derived. The adequacy of these criteria is still to be validated, and the authors' plans for future research in this area are specified
-
Janes, J.W.: ¬The binary nature of continuous relevance judgements : a study of users' perceptions (1991)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 4844) [ClassicSimilarity], result of:
0.29780164 = score(doc=4844,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 4844, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=4844)
0.25 = coord(1/4)
- Abstract
- Replicates a previous study by Eisenberg and Hu regarding users' perceptions of the binary or dichotomous nature of their relevance judgements. The studies examined the assumption that searchers divide documents evenly into relevant and nonrelevant. 35 staff, faculty and doctoral students at Michigan Univ., School of Education and Dept. of Psychology conducted searches, and the retrieved documents were submitted to the searchers in 3 incremental versions: title only; title and abstract; title, abstract and indexing information. At each stage the subjects were asked to judge the relevance of the document to the query. The findings support the earlier study, and the break points between relevance and nonrelevance were not at or near 50%
-
Wilbur, W.J.; Coffee, L.: ¬The effectiveness of document neighboring in search enhancement (1994)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 7418) [ClassicSimilarity], result of:
0.29780164 = score(doc=7418,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 7418, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=7418)
0.25 = coord(1/4)
- Abstract
- Considers two kinds of queries that may be applied to a database. The first is a query written by a searcher to express an information need. The second is a request for documents most similar to a document already judged relevant by the searcher. Examines the effectiveness of these two procedures and shows that in important cases the latter query type is more effective than the former. This provides a new view of the cluster hypothesis and a justification for document neighbouring procedures. If all the documents in a database have readily available precomputed nearest neighbours, a new search algorithm, called parallel neighbourhood searching, becomes possible. Shows that this feedback-based method provides significant improvement in recall over traditional linear searching methods, and appears superior to traditional feedback methods in overall performance
-
Armstrong, C.J.: Do we really care about quality? (1995)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 3946) [ClassicSimilarity], result of:
0.29780164 = score(doc=3946,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 3946, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=3946)
0.25 = coord(1/4)
- Abstract
- With the increased use of local area networks, CD-ROMs and the Internet, an enormous amount of traditional material is becoming available. Quality issues are therefore becoming even more vital. Describes a methodology being evaluated by the Centre for Information Quality Management (CIQM) whereby databases can be quantitatively labelled by their producers, so that users can judge how much reliance can be placed on them. At the same time, each label becomes a database-specific standard to which its information provider must adhere. This may be a route to responsible information supply
-
Armstrong, C.J.; Wheatley, A.: Writing abstracts for online databases : results of database producers' guidelines (1998)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 4295) [ClassicSimilarity], result of:
0.29780164 = score(doc=4295,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 4295, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=4295)
0.25 = coord(1/4)
- Abstract
- Reports on one area of research in an Electronic Libraries Programme (eLib) MODELS (MOving to Distributed Environments for Library Services) supporting study in 3 investigative areas: examination of current database producers' guidelines for their abstract writers; a brief survey of abstracts in some traditional online databases; and a detailed survey of abstracts from 3 types of electronic database (print sourced online databases, Internet subject trees or directories, and Internet gateways). Examination of database producers' guidelines, reported here, gave a clear view of the intentions behind professionally produced traditional (printed index based) database abstracts and provided a benchmark against which to judge the conclusions of the larger investigations into abstract style, readability and content
-
Chen, K.-H.: Evaluating Chinese text retrieval with multilingual queries (2002)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 2851) [ClassicSimilarity], result of:
0.29780164 = score(doc=2851,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 2851, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=2851)
0.25 = coord(1/4)
- Abstract
- This paper reports the design of a Chinese test collection with multilingual queries and the application of this test collection to evaluate information retrieval systems. The effective indexing units, IR models, translation techniques, and query expansion for Chinese text retrieval are identified. The collaboration of East Asian countries on the construction of test collections for cross-language multilingual text retrieval is also discussed in this paper. As well, a tool is designed to help assessors judge relevance and gather the events of relevance judgment. The log file created by this tool will be used to analyze the behaviors of assessors in the future.
-
Seadle, M.: Project ethnography : an anthropological approach to assessing digital library services (2000)
0.07
0.07445041 = product of:
0.29780164 = sum of:
0.29780164 = weight(_text_:judge in 2162) [ClassicSimilarity], result of:
0.29780164 = score(doc=2162,freq=2.0), product of:
0.49805635 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06442181 = queryNorm
0.59792763 = fieldWeight in 2162, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0546875 = fieldNorm(doc=2162)
0.25 = coord(1/4)
- Abstract
- Often libraries try to assess digital library service for their user populations in comprehensive terms that judge its overall success or failure. This article's key assumption is that the people involved must be understood before services can be assessed, especially if evaluators and developers intend to improve a digital library product. Its argument is simply that anthropology can provide the initial understanding, the intellectual basis, on which informed choices about sample population, survey design, or focus group selection can reasonably be made. As an example, this article analyzes the National Gallery of the Spoken Word (NGSW). It includes brief descriptions of nine NGSW micro-cultures and three pairs of dichotomies within these micro-cultures.
-
Gillman, P.: Data handling and text compression (1992)
0.07
0.072747625 = product of:
0.2909905 = sum of:
0.2909905 = weight(_text_:handling in 5305) [ClassicSimilarity], result of:
0.2909905 = score(doc=5305,freq=6.0), product of:
0.40406144 = queryWeight, product of:
6.272122 = idf(docFreq=227, maxDocs=44421)
0.06442181 = queryNorm
0.720164 = fieldWeight in 5305, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
6.272122 = idf(docFreq=227, maxDocs=44421)
0.046875 = fieldNorm(doc=5305)
0.25 = coord(1/4)
- Abstract
- Data compression has a function in text storage and data handling, but not at the level of compressing data files. The reason is that the decompression of such files adds a time delay to the retrieval process, and users can see this delay as a drawback of the system concerned. Compression techniques can, with benefit, be applied to index files. A more relevant data handling problem is that posed by the need, in most systems, to store two versions of imported text. The first is the 'native' version, as it might have come from a word processor or text editor. The second is the ASCII version, which is what is actually imported. Inverted file indexes form yet another version. The problem arises out of the need for dynamic indexing and re-indexing of revisable documents in very large database applications such as are found in Office Automation systems. Four mainstream text-management packages are used to show how this problem is handled, and how generic document architectures such as OCA/CDA and SGML might help