OLAC Record
oai:www.ldc.upenn.edu:LDC2008T02

Metadata
Title:GALE Phase 1 Arabic Blog Parallel Text
Access Rights:Licensing Instructions for Subscription & Standard Members, and Non-Members: http://www.ldc.upenn.edu/language-resources/data/obtaining
Bibliographic Citation:Ma, Xiaoyi, Dalal Zakhary, and Stephanie Strassel. GALE Phase 1 Arabic Blog Parallel Text LDC2008T02. Web Download. Philadelphia: Linguistic Data Consortium, 2008
Contributor:Ma, Xiaoyi
Zakhary, Dalal
Strassel, Stephanie
Date (W3CDTF):2008
Date Issued (W3CDTF):2008-03-19
Description:*Introduction* This file contains the documentation for GALE Phase 1 Arabic Blog Parallel Text, Linguistic Data Consortium (LDC) catalog number LDC2008T02, ISBN 1-58563-462-X.

Blogs are posts to informal web-based journals of varying topical content. GALE Phase 1 Arabic Blog Parallel Text was prepared by the LDC and consists of 102K words (222 files) of Arabic blog text and its English translation from thirty-three sources. This release was used as training data in Phase 1 of the DARPA-funded GALE program.

LDC has released the following GALE Phase 1 & 2 Arabic Parallel Text data sets:

* GALE Phase 1 Arabic Broadcast News Parallel Text - Part 1 (LDC2007T24)
* GALE Phase 1 Arabic Broadcast News Parallel Text - Part 2 (LDC2008T09)
* GALE Phase 1 Arabic Blog Parallel Text (LDC2008T02)
* GALE Phase 1 Arabic Newsgroup Parallel Text - Part 1 (LDC2009T03)
* GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 (LDC2009T09)
* GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 1 (LDC2012T06)
* GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 (LDC2012T14)
* GALE Phase 2 Arabic Newswire Parallel Text (LDC2012T17)
* GALE Phase 2 Arabic Broadcast News Parallel Text (LDC2012T18)
* GALE Phase 2 Arabic Web Parallel Text (LDC2013T01)

*Source Data* The task of preparing this corpus involved four stages of work: data scouting, data harvesting, formatting, and data selection.

Data scouting involved manually searching the web for suitable blog text. Data scouts were assigned particular topics and genres along with a production target in order to focus their web search. Formal annotation guidelines and a customized annotation toolkit helped data scouts to manage the search process and to track progress. Data scouts logged their decisions about potential text of interest (sites, threads and posts) to a database. A nightly process queried the annotation database and harvested all designated URLs.
Whenever possible, the entire site was downloaded, not just the individual thread or post located by the data scout. Once the text was downloaded, its format was standardized (by running various scripts) so that the data could be more easily integrated into downstream annotation processes. Original-format versions of each document were also preserved. Typically, a new script was required for each new domain name that was identified. After scripts were run, an optional manual process corrected any remaining formatting problems.

The selected documents were then reviewed for content suitability using a semi-automatic process. A statistical approach was used to rank each document's relevance to a set of already-selected documents labeled as good. An annotator then reviewed the list of relevance-ranked documents and selected those which were suitable for a particular annotation task or for annotation in general. Those newly-judged documents in turn provided additional input for the generation of new ranked lists.

Manual sentence unit/segment (SU) annotation was also performed on a subset of files following LDC's Quick Rich Transcription specification. Three types of end-of-sentence SU are identified:

- statement SU
- question SU
- incomplete SU

*Translation* After files were selected, they were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE Translation guidelines, which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features (such as names and speech disfluencies), and quality control procedures applied to completed translations. Translators were instructed to return a 50-sentence sample as soon as it was completed. The sample was reviewed by LDC's bilingual language specialists.
Subsequent deliveries were subject to quality controls as described in the translation guidelines. Low-quality translations were returned to the translators for revision.

*TDF Format* All final data are in Tab Delimited Format (TDF). TDF is compatible with other transcription formats, such as the Transcriber format and AG format, and it is easy to process. Each line of a TDF file corresponds to a speech segment and contains 13 tab-delimited fields:

   field           data_type
   -----           ---------
1  file            unicode
2  channel         int
3  start           float
4  end             float
5  speaker         unicode
6  speakerType     unicode
7  speakerDialect  unicode
8  transcript      unicode
9  section         int
10 turn            int
11 segment         int
12 sectionType     unicode
13 suType          unicode

A source TDF file and its translation are identical except that the transcript in the source TDF is replaced by its English translation.

*Encoding* All data are encoded in UTF-8.

*Sponsorship* This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this publication does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

*Samples* For an example of the data in this corpus, please examine these screen captures (jpg) of the text:

* source
* translation
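The 13-field TDF layout described above can be read with a short script. The following is an illustrative sketch, not an LDC-provided tool: the field names and types follow the documentation, while the sample line and all values in it are invented for demonstration.

```python
# Sketch of a TDF line parser based on the 13-field layout in the
# documentation above. Field names/types come from the record; the
# sample line below is hypothetical, not real corpus data.

FIELDS = [
    "file", "channel", "start", "end", "speaker", "speakerType",
    "speakerDialect", "transcript", "section", "turn", "segment",
    "sectionType", "suType",
]
INT_FIELDS = {"channel", "section", "turn", "segment"}
FLOAT_FIELDS = {"start", "end"}


def parse_tdf_line(line):
    """Split one tab-delimited TDF line into a dict, converting numeric fields."""
    values = line.rstrip("\n").split("\t")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    record = {}
    for name, value in zip(FIELDS, values):
        if name in INT_FIELDS:
            record[name] = int(value)
        elif name in FLOAT_FIELDS:
            record[name] = float(value)
        else:
            record[name] = value
    return record


# Invented sample segment: a "statement" SU with made-up timings.
sample = "\t".join([
    "blog_001.tdf", "0", "0.0", "4.2", "author1", "NA", "NA",
    "Example transcript text", "1", "1", "1", "body", "statement",
])
record = parse_tdf_line(sample)
```

A translation TDF for the same file would parse identically, with the `transcript` field holding the English translation instead of the Arabic source text.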
Extent:Corpus size: 6758 KB
Identifier:LDC2008T02
https://catalog.ldc.upenn.edu/LDC2008T02
ISBN: 1-58563-462-X
ISLRN: 461-663-437-911-1
DOI: 10.35111/x6pk-3q51
Language:Standard Arabic
English
Language (ISO639):arb
eng
License:LDC User Agreement for Non-Members: https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf
Medium:Distribution: Web Download
Publisher:Linguistic Data Consortium
Publisher (URI):https://www.ldc.upenn.edu
Relation (URI):https://catalog.ldc.upenn.edu/docs/LDC2008T02
Rights Holder:Portions © 2005-2007, 2008 Trustees of the University of Pennsylvania
Type (DCMI):Text
Type (OLAC):primary_text

OLAC Info

Archive:  The LDC Corpus Catalog
Description:  http://www.language-archives.org/archive/www.ldc.upenn.edu
GetRecord:  OAI-PMH request for OLAC format
GetRecord:  Pre-generated XML file

OAI Info

OaiIdentifier:  oai:www.ldc.upenn.edu:LDC2008T02
DateStamp:  2020-11-30
GetRecord:  OAI-PMH request for simple DC format

Search Info

Citation: Ma, Xiaoyi; Zakhary, Dalal; Strassel, Stephanie. 2008. GALE Phase 1 Arabic Blog Parallel Text. Linguistic Data Consortium.
Terms: area_Asia area_Europe country_GB country_SA dcmi_Text iso639_arb iso639_eng olac_primary_text


http://www.language-archives.org/item.php/oai:www.ldc.upenn.edu:LDC2008T02
Up-to-date as of: Fri Dec 6 7:47:42 EST 2024