-
Popper, A.: Daten eingewickelt : Dynamische Webseiten mit XML und SQL (2001)
0.20
0.20002043 = product of:
0.40004086 = sum of:
0.36399105 = weight(_text_:java in 6804) [ClassicSimilarity], result of:
0.36399105 = score(doc=6804,freq=2.0), product of:
0.4674661 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0663307 = queryNorm
0.77864695 = fieldWeight in 6804, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.078125 = fieldNorm(doc=6804)
0.036049824 = weight(_text_:und in 6804) [ClassicSimilarity], result of:
0.036049824 = score(doc=6804,freq=2.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.24504554 = fieldWeight in 6804, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.078125 = fieldNorm(doc=6804)
0.5 = coord(2/4)
- Abstract
- Anyone who generates dynamic web pages with XML will sooner or later want to tap databases as well. The Apache project Cocoon provides a convenient Java/servlet production environment for doing so.
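The relevance figures shown above are Lucene ClassicSimilarity explanations: each term weight is queryWeight (idf x queryNorm) multiplied by fieldWeight (sqrt(termFreq) x idf x fieldNorm), and the summed term weights are scaled by the coordination factor coord(matching terms / query terms). A minimal sketch that recomputes the explanation for the record above; the class and method names are illustrative and not part of the catalogue software:

```java
/** Recomputes the ClassicSimilarity explanation shown for doc 6804 above. */
public class ExplainCheck {

    /** weight = queryWeight * fieldWeight, as in Lucene's classic TF-IDF scoring. */
    static double termWeight(double freq, double idf, double fieldNorm, double queryNorm) {
        double queryWeight = idf * queryNorm;       // e.g. 7.0475073 * 0.0663307 = 0.4674661
        double tf = Math.sqrt(freq);                // 1.4142135 for freq = 2.0
        double fieldWeight = tf * idf * fieldNorm;  // 0.77864695 for the "java" term
        return queryWeight * fieldWeight;           // 0.36399105
    }

    public static void main(String[] args) {
        double queryNorm = 0.0663307;
        double wJava = termWeight(2.0, 7.0475073, 0.078125, queryNorm); // ~0.364
        double wUnd  = termWeight(2.0, 2.217899,  0.078125, queryNorm); // ~0.036
        double coord = 2.0 / 4.0;                   // 2 of the 4 query terms occur in the doc
        System.out.printf("score = %.8f%n", (wJava + wUnd) * coord);    // ~0.20002043
    }
}
```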
-
Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001)
0.11
0.11032892 = product of:
0.44131568 = sum of:
0.44131568 = weight(_text_:java in 6896) [ClassicSimilarity], result of:
0.44131568 = score(doc=6896,freq=6.0), product of:
0.4674661 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0663307 = queryNorm
0.94405925 = fieldWeight in 6896, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=6896)
0.25 = coord(1/4)
- Abstract
- This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
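The generated Java classes are described as carrying marshalling and unmarshalling methods between in-memory objects and RDF. The paper's actual generated API is not reproduced here; the following is only a hypothetical sketch of the shape such methods could take:

```java
// Hypothetical shape of a generated ontology class; the method names and the
// choice of RDF serialisation are illustrative only, not taken from the paper.
public interface RdfMarshallable {
    /** Serialise this object's state to an RDF document (e.g. RDF/XML). */
    String marshalToRdf();

    /** Populate this object's fields from an RDF serialisation of the same concept. */
    void unmarshalFromRdf(String rdfDocument);
}
```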
-
Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998)
0.06
0.063698426 = product of:
0.2547937 = sum of:
0.2547937 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
0.2547937 = score(doc=4287,freq=2.0), product of:
0.4674661 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0663307 = queryNorm
0.5450528 = fieldWeight in 4287, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=4287)
0.25 = coord(1/4)
- Abstract
- Describes the work undertaken by the William Blake Archive, University of Virginia, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded or copied from one location to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
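Because the ID record is written into the JPEG file's own textual-metadata area, it stays with the image wherever the file goes. As a rough illustration of that general mechanism (not the Blake Archive's actual record layout or tooling), the sketch below splices a text record into a JPEG as a comment (COM, 0xFFFE) segment directly after the SOI marker:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative sketch: embed a short text record in a JPEG as a COM segment. */
public class JpegTextRecord {
    public static void embed(Path in, Path out, String record) throws IOException {
        byte[] jpeg = Files.readAllBytes(in);
        if (jpeg.length < 2 || (jpeg[0] & 0xFF) != 0xFF || (jpeg[1] & 0xFF) != 0xD8) {
            throw new IOException("not a JPEG: missing SOI marker");
        }
        byte[] text = record.getBytes(StandardCharsets.ISO_8859_1);
        int segLen = text.length + 2;                       // length field counts its own 2 bytes
        byte[] result = new byte[jpeg.length + 4 + text.length];
        result[0] = (byte) 0xFF; result[1] = (byte) 0xD8;   // keep SOI
        result[2] = (byte) 0xFF; result[3] = (byte) 0xFE;   // COM (comment) marker
        result[4] = (byte) (segLen >> 8);                   // segment length, big-endian
        result[5] = (byte) (segLen & 0xFF);
        System.arraycopy(text, 0, result, 6, text.length);  // the textual record itself
        System.arraycopy(jpeg, 2, result, 6 + text.length, jpeg.length - 2); // rest of image
        Files.write(out, result);
    }
}
```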
-
Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018)
0.04
0.037077025 = product of:
0.1483081 = sum of:
0.1483081 = weight(_text_:having in 200) [ClassicSimilarity], result of:
0.1483081 = score(doc=200,freq=4.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.37382224 = fieldWeight in 200, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.03125 = fieldNorm(doc=200)
0.25 = coord(1/4)
- Abstract
- The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets. Design/methodology/approach: Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse. Findings: The author notes several major impediments to implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description to enable interlinking between vocabularies currently in use and those which are used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies. Research limitations/implications: The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from having a common metadata model for critical archival descriptive activities. Practical implications: By having a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars. Originality/value: Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. Thus, this research addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.
-
Weibel, S.L.: Border crossings : reflections on a decade of metadata consensus building (2005)
0.03
0.03277177 = product of:
0.13108708 = sum of:
0.13108708 = weight(_text_:having in 2187) [ClassicSimilarity], result of:
0.13108708 = score(doc=2187,freq=2.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.3304153 = fieldWeight in 2187, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0390625 = fieldNorm(doc=2187)
0.25 = coord(1/4)
- Abstract
- In June of this year, I performed my final official duties as part of the Dublin Core Metadata Initiative management team. It is a happy irony to affix a seal on that service in this journal, as both D-Lib Magazine and the Dublin Core celebrate their tenth anniversaries. This essay is a personal reflection on some of the achievements and lessons of that decade. The OCLC-NCSA Metadata Workshop took place in March of 1995, and as we tried to understand what it meant and who would care, D-Lib magazine came into being and offered a natural venue for sharing our work. I recall a certain skepticism when Bill Arms said "We want D-Lib to be the first place people look for the latest developments in digital library research." These were the early days in the evolution of electronic publishing, and the goal was ambitious. By any measure, a decade of high-quality electronic publishing is an auspicious accomplishment, and D-Lib (and its host, CNRI) deserve congratulations for having achieved their goal. I am grateful to have been a contributor. That first DC workshop led to further workshops, a community, a variety of standards in several countries, an ISO standard, a conference series, and an international consortium. Looking back on this evolution is both satisfying and wistful. While I am pleased that the achievements are substantial, the unmet challenges also provide a rich till in which to cultivate insights on the development of digital infrastructure.
-
Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003)
0.03
0.026217414 = product of:
0.104869656 = sum of:
0.104869656 = weight(_text_:having in 2199) [ClassicSimilarity], result of:
0.104869656 = score(doc=2199,freq=2.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.26433223 = fieldWeight in 2199, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.03125 = fieldNorm(doc=2199)
0.25 = coord(1/4)
- Abstract
- On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in the IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
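For Dublin Core, the element URIs that resulted from this work live in the published http://purl.org/dc/elements/1.1/ namespace. The small helper below only illustrates the identification pattern the resolution calls for; it is not code from any of the signatory organisations, and the other standards' URI schemes are not shown because they are not given here:

```java
import java.util.Map;

/** Illustrative lookup of URIs for a few Dublin Core Metadata Element Set 1.1 elements. */
public class ElementUris {
    private static final String DC = "http://purl.org/dc/elements/1.1/";

    private static final Map<String, String> ELEMENTS = Map.of(
            "title",   DC + "title",
            "creator", DC + "creator",
            "subject", DC + "subject",
            "date",    DC + "date");

    /** Returns the URI identifying the given element, or null if not listed here. */
    public static String uriFor(String elementName) {
        return ELEMENTS.get(elementName.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(uriFor("Creator")); // http://purl.org/dc/elements/1.1/creator
    }
}
```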
-
Pole, T.: Contextual classification in the Metadata Object Manager (M.O.M.) (1999)
0.02
0.022940237 = product of:
0.09176095 = sum of:
0.09176095 = weight(_text_:having in 672) [ClassicSimilarity], result of:
0.09176095 = score(doc=672,freq=2.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.2312907 = fieldWeight in 672, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.02734375 = fieldNorm(doc=672)
0.25 = coord(1/4)
- Abstract
- To classify is (according to Webster's) "to distribute into classes; to arrange according to a system; to arrange in sets according to some method founded on common properties or characters." A model of classification is a type or category or (excuse the recursive definition) a class of classification "system" as mentioned in Webster's definition. One employs a classification model to implement a specific classification system. (E.g. we employ the hierarchical classification model to implement the Dewey Decimal System.) An effective classification model must represent both the commonality (Webster's "common properties") and the differences among the items being classified. The commonality of each category or class defines a test to determine which items belong to the set that class represents. The relationships among the classes define the variability among the sets that the classification model can represent. Therefore, a classification model is more than an enumeration or other simple listing of the names of its classes. Our purpose in employing classification models is to build metadata systems that represent and manage knowledge, so that users of the systems we build can: quickly and accurately define (the commonality of) what knowledge they require, allowing the user great flexibility in how that desire is described; be presented with existing information assets that best match the stated requirements; distinguish (the variability) among the candidates to determine their best choice(s), without actually having to examine the individual items themselves; and retrieve the knowledge they need. The metadata model we present is Contextual Classification. It is a synthesis of several traditional metadata models, including controlled keyword indices, hierarchical classification, attribute-value systems, Faceted Classification, and Evolutionary Faceted Classification. Research into building online library systems for software and software documentation (Frakes and Pole, 1992; Pole, 1996) has shown the need for, and viability of, combining the strengths and minimizing the weaknesses of multiple metadata models in the development of information systems. The MetaData Object Manager (M.O.M.), a MetaData Warehouse (MDW) and editorial workflow system developed for the Thomson Financial Publishing Group, builds on this earlier research. From controlled keyword systems we borrow the idea of representing commonalities by defining formally defined subject areas or categories of information, whose sets are represented by the categories' names. From hierarchical classification we borrow the concept of relating these categories and classes to each other to represent the variability in a collection of information sources. From attribute-value systems we borrow the concept that each information source can be described in different ways, each with respect to an attribute of the information being described. From Faceted Classification we borrow the concept of relating the classes themselves into sets of classes, which a faceted classification system would describe as facets of terms. In this paper we will define the Contextual Classification model, comparing it to the traditional metadata models from which it has evolved. Using the M.O.M. as an example, we will then discuss both the use of Contextual Classification in developing this system, and the organizational, performance and reliability
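Contextual Classification, as described above, combines a controlled category name (the commonality test), hierarchical relations among classes, attribute-value descriptions, and facet-like groupings of the classes themselves. The sketch below is a hypothetical data structure for those four ingredients, not the M.O.M.'s actual schema:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of one class in a contextual classification scheme. */
public class Category {
    final String name;                                       // controlled keyword: the commonality test
    final Category parent;                                   // hierarchical relation among classes
    final Map<String, String> attributes = new HashMap<>();  // attribute-value descriptions
    final Set<String> facets = new HashSet<>();              // facets grouping related classes

    Category(String name, Category parent) {
        this.name = name;
        this.parent = parent;
    }

    /** A category covers a term if it, or any ancestor in the hierarchy, carries that name. */
    boolean covers(String term) {
        for (Category c = this; c != null; c = c.parent) {
            if (c.name.equalsIgnoreCase(term)) return true;
        }
        return false;
    }
}
```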
-
Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002)
0.02
0.022940237 = product of:
0.09176095 = sum of:
0.09176095 = weight(_text_:having in 2210) [ClassicSimilarity], result of:
0.09176095 = score(doc=2210,freq=2.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.2312907 = fieldWeight in 2210, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.02734375 = fieldNorm(doc=2210)
0.25 = coord(1/4)
- Abstract
- * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179 (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.); * The xml.org directory of the Extended Markup Language (XML) document specifications facilitating re-use of Document Type Definition (DTD), hosted by the Organization for the Advancement of Structured Information Standards (OASIS); * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen; * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents; * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world; * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives. Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas to one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
-
Baker, T.: Languages for Dublin Core (1998)
0.02
0.022940237 = product of:
0.09176095 = sum of:
0.09176095 = weight(_text_:having in 2257) [ClassicSimilarity], result of:
0.09176095 = score(doc=2257,freq=2.0), product of:
0.39673427 = queryWeight, product of:
5.981156 = idf(docFreq=304, maxDocs=44421)
0.0663307 = queryNorm
0.2312907 = fieldWeight in 2257, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.981156 = idf(docFreq=304, maxDocs=44421)
0.02734375 = fieldNorm(doc=2257)
0.25 = coord(1/4)
- Abstract
- Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
-
Ecker, R.: ¬Das digitale Buch im Internet : Methoden der Erfassung, Aufbereitung und Bereitstellung (1998)
0.02
0.021629894 = product of:
0.08651958 = sum of:
0.08651958 = weight(_text_:und in 2511) [ClassicSimilarity], result of:
0.08651958 = score(doc=2511,freq=18.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.58810925 = fieldWeight in 2511, product of:
4.2426405 = tf(freq=18.0), with freq of:
18.0 = termFreq=18.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=2511)
0.25 = coord(1/4)
- Abstract
- The electronic capture of all significant printed information and its dissemination over the worldwide data networks in digitised form or as full text is currently one of the greatest challenges and opportunities for our information society. The following article addresses the frequently asked question of the technical methods for the electronic capture, indexing, and provision of printed originals and describes the most important steps of digitisation and data preparation and their technical and organisational parameters.
- Content
- Describes the process of digitised preparation of text, images, etc. via scanning.
-
Holzhause, R.; Krömker, H.; Schnöll, M.: Vernetzung von audiovisuellen Inhalten und Metadaten : Metadatengestütztes System zur Generierung und Erschließung von Medienfragmenten (Teil 1) (2016)
0.02
0.020923655 = product of:
0.08369462 = sum of:
0.08369462 = weight(_text_:und in 636) [ClassicSimilarity], result of:
0.08369462 = score(doc=636,freq=22.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.5689069 = fieldWeight in 636, product of:
4.690416 = tf(freq=22.0), with freq of:
22.0 = termFreq=22.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=636)
0.25 = coord(1/4)
- Abstract
- The following article deals with the requirements and tasks of a system for networked data management that enables time-based links between audiovisual content and metadata. By means of these interconnected relations, an audiovisual medium can be effectively described and captured not only as a whole but also in its fragments and contexts. On the basis of this data processing, a wide range of interfaces and applications for context-based indexing, editing, and delivery of documents and media can be realised, which offer considerable added value in particular for media libraries and media asset management systems in the media sector, but can also fulfil tasks within academic libraries and archival systems.
-
Christof, J.: Metadata sharing : Die Verbunddatenbank Internetquellen der Virtuellen Fachbibliothek Politikwissenschaft und der Virtuellen Fachbibliothek Wirtschaftswissenschaften (2003)
0.02
0.018926159 = product of:
0.075704634 = sum of:
0.075704634 = weight(_text_:und in 2916) [ClassicSimilarity], result of:
0.075704634 = score(doc=2916,freq=18.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.5145956 = fieldWeight in 2916, product of:
4.2426405 = tf(freq=18.0), with freq of:
18.0 = termFreq=18.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=2916)
0.25 = coord(1/4)
- Abstract
- In the context of the projects "Virtuelle Fachbibliothek Politikwissenschaft" and "Virtuelle Fachbibliothek Wirtschaftswissenschaften", both funded by the Deutsche Forschungsgemeinschaft (DFG), a subject information guide for recording online sources is being built in each case. To this end, the responsible institutions, the Staats- und Universitätsbibliothek Hamburg (SUB Hamburg), the Universitäts- und Stadtbibliothek Köln (USB Köln), and the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW Kiel), have developed a metadata concept based on Dublin Core and aligned with national and international developments, and have implemented this concept in building the Verbunddatenbank Internetquellen (union database of Internet sources).
- Source
- Bibliotheken und Informationseinrichtungen - Aufgaben, Strukturen, Ziele: 29. Arbeits- und Fortbildungstagung der ASpB / Sektion 5 im DBV in Zusammenarbeit mit der BDB, BIB, DBV, DGI und VDB, zugleich DBV-Jahrestagung, 8.-11.4.2003 in Stuttgart. Red.: Margit Bauer
-
Hengel-Dittrich, C.: Metadaten und Persistent Identifier : ein Informationstag für Verlage an Der Deutschen Bibliothek (1999)
0.02
0.018732041 = product of:
0.074928164 = sum of:
0.074928164 = weight(_text_:und in 5222) [ClassicSimilarity], result of:
0.074928164 = score(doc=5222,freq=6.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.50931764 = fieldWeight in 5222, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.09375 = fieldNorm(doc=5222)
0.25 = coord(1/4)
- Footnote
- Report on an event held on 9 September 1999 by the DDB and the Buchhändlervereinigung.
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 46(1999) H.6, S.548-553
-
Qualität in der Inhaltserschließung (2021)
0.02
0.018732037 = product of:
0.07492815 = sum of:
0.07492815 = weight(_text_:und in 1754) [ClassicSimilarity], result of:
0.07492815 = score(doc=1754,freq=54.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.5093176 = fieldWeight in 1754, product of:
7.3484693 = tf(freq=54.0), with freq of:
54.0 = termFreq=54.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=1754)
0.25 = coord(1/4)
- Abstract
- The 70th volume of the BIPRA series deals with quality in subject indexing in the context of established procedures and technological innovations. When heterogeneous products of different methods and systems meet, minimum requirements for the quality of subject indexing must be defined. The question of quality is currently being discussed intensively in various contexts and is taken up in the present volume. Authors active in this field describe, from their respective perspectives, different aspects of metadata, authority data, formats, indexing procedures, and indexing policy. The volume is intended as a guide and a stimulus for the discussion of quality in subject indexing.
- Content
- Inhalt: Editorial - Michael Franke-Maier, Anna Kasprzik, Andreas Ledl und Hans Schürmann Qualität in der Inhaltserschließung - Ein Überblick aus 50 Jahren (1970-2020) - Andreas Ledl Fit for Purpose - Standardisierung von inhaltserschließenden Informationen durch Richtlinien für Metadaten - Joachim Laczny Neue Wege und Qualitäten - Die Inhaltserschließungspolitik der Deutschen Nationalbibliothek - Ulrike Junger und Frank Scholze Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata - Lydia Pintscher, Peter Bourgonje, Julián Moreno Schneider, Malte Ostendorff und Georg Rehm Qualitätssicherung in der GND - Esther Scheven Qualitätskriterien und Qualitätssicherung in der inhaltlichen Erschließung - Thesenpapier des Expertenteams RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI) Coli-conc - Eine Infrastruktur zur Nutzung und Erstellung von Konkordanzen - Uma Balakrishnan, Stefan Peters und Jakob Voß Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten - Clemens Neudecker, Karolina Zaczynska, Konstantin Baierer, Georg Rehm, Mike Gerber und Julián Moreno Schneider Datenqualität als Grundlage qualitativer Inhaltserschließung - Jakob Voß Bemerkungen zu der Qualitätsbewertung von MARC-21-Datensätzen - Rudolf Ungváry und Péter Király Named Entity Linking mit Wikidata und GND - Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten - Sina Menzel, Hannes Schnaitter, Josefine Zinck, Vivien Petras, Clemens Neudecker, Kai Labusch, Elena Leitner und Georg Rehm Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) - Fabian Steeg und Adrian Pohl Verbale Erschließung in Katalogen und Discovery-Systemen - Überlegungen zur Qualität - Heidrun Wiesenmüller Inhaltserschließung für Discovery-Systeme gestalten - Jan Frederik Maas Evaluierung von Verschlagwortung im Kontext des Information Retrievals - Christian Wartena und Koraljka Golub Die Qualität der Fremddatenanreicherung FRED - Cyrus Beck Quantität als Qualität - Was die Verbünde zur Verbesserung der Inhaltserschließung beitragen können - Rita Albrecht, Barbara Block, Mathias Kratzer und Peter Thiessen Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung - Harald Sack
- Footnote
- Cf.: https://www.degruyter.com/document/doi/10.1515/9783110691597/html. DOI: https://doi.org/10.1515/9783110691597. Review in: Information - Wissenschaft und Praxis 73(2022) H.2-3, S.131-132 (B. Lorenz u. V. Steyer). Further review in: o-bib 9(2022) Nr.3 (Martin Völkl) [https://www.o-bib.de/bib/article/view/5843/8714].
- Series
- Bibliotheks- und Informationspraxis; 70
-
Janssen, U.: ONIX - Metadaten in Verlagen und Buchhandel (2003)
0.02
0.018579654 = product of:
0.07431862 = sum of:
0.07431862 = weight(_text_:und in 2764) [ClassicSimilarity], result of:
0.07431862 = score(doc=2764,freq=34.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.50517434 = fieldWeight in 2764, product of:
5.8309517 = tf(freq=34.0), with freq of:
34.0 = termFreq=34.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=2764)
0.25 = coord(1/4)
- Abstract
- ONIX (the acronym stands for Online Information Exchange) is the first globally accepted standard for enriched metadata about books, serials, and other products of the book trade. It was originally developed in the USA, initially with the aim of being able to supply the Internet book trade with catalogue data and additional marketing material in a uniform format. From the beginning, wholesalers, bibliographic agencies, and publishers in the USA, and soon also from the United Kingdom, were involved in the development of ONIX and supported it financially and with staff. The maintenance and further development of this new standard was then placed in the hands of EDItEUR, the international umbrella organisation for standardisation in the book trade, founded and supported by associations from bookselling, publishing, and librarianship, including the Börsenverein des Deutschen Buchhandels and the European Booksellers Federation. EDItEUR's office and secretariat are provided in London by Book Industry Communication (BIC), a joint organisation of the British publishers' and booksellers' associations. EDItEUR was founded ten years ago to develop EDI standards (EDI = electronic data interchange) for communication between the book trade and publishers on the one hand and between libraries and their suppliers on the other. To this end, guidelines were adopted for a number of EANCOM messages, including orders, order confirmations, invoices, and quotations. A draft guideline for the PRICAT message (price and sales catalogue), which is intended for the transmission of catalogue data, was developed a few years ago but has so far not been tested in practice anywhere, let alone used productively. The transaction-related EDI messages, by contrast, are widely used in Europe both in libraries and in the book trade.
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 50(2003) H.4, S.210-214
-
Rusch-Feja, D.: Nationale und internationale Ansätze zur Vereinheitlichung von Metadaten im Bildungsbereich (2001)
0.02
0.017843753 = product of:
0.07137501 = sum of:
0.07137501 = weight(_text_:und in 6859) [ClassicSimilarity], result of:
0.07137501 = score(doc=6859,freq=16.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.48516542 = fieldWeight in 6859, product of:
4.0 = tf(freq=16.0), with freq of:
16.0 = termFreq=16.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=6859)
0.25 = coord(1/4)
- Abstract
- The many activities and the number of partners participating in these groups show how important the harmonisation and semantic interoperability of metadata in the education sector is. Examples of the individual metadata sets can be taken from the concordances by Stuart Sutton, where it becomes clear that some of the concepts behind the metadata categories do not actually correspond. The joint work of the groups has only just begun, and results of this standardisation process can be expected in the coming months.
- Series
- Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4
- Source
- Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
-
Oehlschläger, S.: Abschlussworkshop des Projektes META-LIB und 1. Metadaten-Workshop der Arbeitsstelle für Standardisierung Der Deutschen Bibliothek : Metadaten - Alter Wein in neuen Schläuchen? (2003)
0.02
0.017843753 = product of:
0.07137501 = sum of:
0.07137501 = weight(_text_:und in 2758) [ClassicSimilarity], result of:
0.07137501 = score(doc=2758,freq=16.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.48516542 = fieldWeight in 2758, product of:
4.0 = tf(freq=16.0), with freq of:
16.0 = termFreq=16.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=2758)
0.25 = coord(1/4)
- Abstract
- The new possibilities for communication and publication that have emerged since the mid-1990s through the Internet, and above all through the World Wide Web (WWW), confront the academic libraries charged with collecting, indexing, and making publications usable with new challenges. Online publications require a review and adaptation of existing methods and workflows. Alongside library procedures, new approaches have developed on the WWW. To support the searching, identification, and accessing of online publications, structured data are supplied with them: metadata. Beyond the metadata needed to locate resources, further types of metadata are playing an increasingly important role for online publications.
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 50(2003) H.4, S.179-181
-
Koch, G.; Koch, W.: Aggregation and management of metadata in the context of Europeana (2017)
0.02
0.017843753 = product of:
0.07137501 = sum of:
0.07137501 = weight(_text_:und in 4910) [ClassicSimilarity], result of:
0.07137501 = score(doc=4910,freq=16.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.48516542 = fieldWeight in 4910, product of:
4.0 = tf(freq=16.0), with freq of:
16.0 = termFreq=16.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=4910)
0.25 = coord(1/4)
- Abstract
- Relating and linking data on the Internet paves the way for the realisation of the Semantic Web. Only the semantic connection of heterogeneous data holdings enables cross-collection searches and subsequent "machine learning". The article outlines the activities of the European Digital Library (Europeana) in the areas of metadata management and the semantic linking of data. It gives a brief overview of current research priorities and implementation strategies and, beyond that, describes individual projects and tailor-made service offerings for natural history data, regional cultural institutions, and audio collections.
- Source
- Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.170-178
-
Panskus, E.J.: Metadaten zur Identifizierung von Falschmeldungen im digitalen Raum : eine praktische Annäherung (2019)
0.02
0.017660735 = product of:
0.07064294 = sum of:
0.07064294 = weight(_text_:und in 452) [ClassicSimilarity], result of:
0.07064294 = score(doc=452,freq=12.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.48018923 = fieldWeight in 452, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=452)
0.25 = coord(1/4)
- Abstract
- In many countries, populist and racist forces are gaining strength. With Poland and Hungary, even members of the European Union are weakening rule-of-law institutions.[1] Turkey is turning ever further away from the EU and drifting to the brink of dictatorship. In Austria, a right-wing populist was only narrowly prevented from becoming federal president. All of these events are taking place, or took place, partly because of resentment and distrust towards state and established institutions such as the traditional media, governments, and business.
-
Grossmann, S.: Meta-Strukturen in Intranets : Konzepte, Vorgehensweise, Beispiele (2001)
0.02
0.016691303 = product of:
0.06676521 = sum of:
0.06676521 = weight(_text_:und in 6775) [ClassicSimilarity], result of:
0.06676521 = score(doc=6775,freq=14.0), product of:
0.1471148 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0663307 = queryNorm
0.4538307 = fieldWeight in 6775, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=6775)
0.25 = coord(1/4)
- Abstract
- Most intranets are heading for an information infarction: organisations often lack clear role concepts for entering, maintaining, and developing their intranets, and above all methodical principles for capturing and indexing the various kinds of information. This contribution describes the basic concepts of meta-structuring, develops a proven procedure for implementing corresponding standards, and illustrates it with concrete examples.
- Series
- Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4
- Source
- Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt