Researcher's Guide to Using Aggregate Analysis of ClinicalTrials.gov (AACT) Database

What is AACT?

AACT is the database for Aggregate Analysis of ClinicalTrials.gov. This version of AACT is a PostgreSQL relational database containing information about clinical studies that have been registered at ClinicalTrials.gov. AACT includes all of the protocol and results data elements for studies that are publicly available at ClinicalTrials.gov. Content is downloaded daily from ClinicalTrials.gov and loaded into AACT.

What population of studies is represented in AACT?

All studies registered and publicly available in ClinicalTrials.gov are included in AACT. ClinicalTrials.gov was released for the registration of studies on February 29, 2000. The registry accepts interventional studies, in which participants are assigned according to a research protocol to receive specific interventions, as well as observational studies. It also includes Expanded Access records, which describe the procedure for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy and who are unable to participate in a controlled clinical study.

The registration of studies and reporting of results and adverse events has been mandated to a large extent by requirements (both legal and institutional) implemented as part of the Food and Drug Administration Amendments Act (FDAAA), as well as by requirements introduced by the International Committee of Medical Journal Editors (ICMJE), the European Medicines Agency (EMA) and the National Institutes of Health (NIH) regarding registration and reporting of results of clinical studies. Table 1 describes the scope of these requirements.

Table 1: Scope of Interventional Studies Covered by Major Reporting Policies*

For each policy, the registration & results reporting requirements are given, followed by the effective date(s).

NIH Policy

Registration & Results Reporting Requirements: Every clinical trial funded in whole or in part by NIH is expected to be registered on ClinicalTrials.gov and have summary results information submitted and posted in a timely manner, whether subject to FDAAA 801 or not.

Effective Date(s): January 18, 2017. The policy is effective for applications for funding, including grants, other transactions, and contracts submitted on or after January 18, 2017. For the NIH intramural program, the policy applies to clinical trials initiated on or after January 18, 2017. Timelines for registration and results/adverse event reporting are the same as for trials subject to FDAAA 801.

NCI Access Policy

Registration & Results Reporting Requirements: The NCI issued its Policy Ensuring Public Availability of Results from NCI-supported Clinical Trials. Generally, for "all initiated or commenced NCI-Supported Interventional Clinical Trials whether extramural or intramural" (Covered Trials), "Final Trial Results are expected to be reported in a publicly accessible manner within 12 months of the Trial's Primary Completion Date regardless of whether the clinical trial was completed as planned or terminated earlier." This policy "will be incorporated as a Term and Condition of the award."

Effective Date(s): January 2015.

FDAAA and Final Rule

Registration & Results Reporting Requirements: The following must be registered in ClinicalTrials.gov ("Applicable Clinical Trials," or ACTs):

  • Interventional studies of drugs, biologics, or devices (whether or not approved for marketing)
  • Studies in phases 2 through 4
  • Studies with at least 1 US site or conducted under an IND/IDE

Results and adverse event reporting is required for studies that meet the above registration requirements if they study drugs, biologics, or devices that are approved, licensed, or cleared by the FDA.

The Final Rule clarified the definition of an ACT and expanded results and adverse event reporting requirements to include ACTs of unapproved products.

Effective Date(s): September 27, 2007. Studies initiated after this date, or with a completion date later than December 25, 2007, are subject to FDAAA requirements. Registration is required no later than 21 days after the first patient is enrolled. Results and adverse events must be reported for these studies (if required) within 1 year of completing data collection for the pre-specified primary outcome. For ACTs of devices not previously approved or cleared by FDA, public posting of registration information is delayed until after FDA approval/clearance.

September 2008. Results reporting launched with optional adverse event reporting.

September 2009. Adverse event information became required.

January 18, 2017. Final Rule for FDAAA 801 effective, with compliance expected as of April 18, 2017. Under the Final Rule:

  • Responsible parties of ACTs of devices not previously cleared or approved by FDA may authorize NIH to post registration information prior to FDA approval/clearance.
  • For ACTs of unapproved products, results reporting may be delayed for up to 2 additional years (i.e., up to 3 years total after the primary completion date).

ICMJE

Registration & Results Reporting Requirements: Interventional studies of any intervention type, phase, or geographical location must be registered in ClinicalTrials.gov or another approved registry. There are no results reporting requirements.

Effective Date(s): July 1, 2005. Studies initiated after this date must be registered before the first patient is enrolled; studies initiated before this date must be retrospectively registered to be considered for publication.

EMA

Registration & Results Reporting Requirements: The following must be registered in ClinicalTrials.gov or another approved registry:

  • Interventional studies of drugs or biologics (whether or not approved for marketing)
  • Pediatric phase 1 studies
  • Studies in phases 2 through 4
  • Studies taking place in at least 1 EU site

Results reporting is required for all studies that meet registration requirements.

Effective Date(s): May 1, 2004. EMA launched EudraCT.

March 22, 2011. EMA launched the EU Clinical Trials Register.

October 11, 2013. EMA expanded EudraCT to include summary results.

* Adapted from The ClinicalTrials.gov results database – update and key issues and ClinicalTrials.gov summary of selected events, policies, and laws related to the development and expansion of ClinicalTrials.gov. For complete descriptions of policy requirements, see the references cited. EMA denotes European Medicines Agency; EU, European Union; FDAAA, Food and Drug Administration Amendments Act; ICMJE, International Committee of Medical Journal Editors; IDE, investigational device exemption; IND, investigational new drug application; NCI, National Cancer Institute; NIH, National Institutes of Health; US, United States.

Based on these policies, characteristics such as intervention type, study phase, location of study sites, funding source, and dates of study conduct may influence the likelihood that a study is included in the ClinicalTrials.gov registry.

Is the information in AACT up-to-date?

New content is downloaded from ClinicalTrials.gov and loaded into AACT every evening. These daily updates use the ClinicalTrials.gov RSS feed to identify studies that have recently been added or changed in ClinicalTrials.gov. New & modified studies are downloaded via the ClinicalTrials.gov API & loaded into AACT. Daily updates take anywhere from 20 minutes to several hours, depending on how many studies have been added or changed. A full refresh of the AACT database is run quarterly. The full load takes approximately one day, during which time the cloud-based version of AACT is not available. On the first of each month, a static copy of AACT is frozen and made available in 2 formats: 1) a PostgreSQL dump file and 2) a set of delimited text files. Current and previous copies are available on the AACT website and may be useful for researchers seeking to run analyses off a static version of the database.
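
For example, users of a static copy can gauge how current their data are with a simple query. This is a sketch only; it assumes the last_update_posted_date element from ClinicalTrials.gov is loaded into the Studies table (verify the column name against the data dictionary):

    -- The most recent "last update posted" date gives a rough sense of
    -- how current this copy of the database is.
    SELECT MAX(last_update_posted_date) AS most_recent_update
    FROM studies;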

How are unique studies identified in AACT?

Studies registered at ClinicalTrials.gov are identified by a unique identifier, the NCT_ID. Because of the quality assurance measures applied by ClinicalTrials.gov staff to registration entries, we can be reasonably certain that each study (i.e., NCT_ID) entered in ClinicalTrials.gov refers to a unique clinical study; however, a small number of duplicate records may exist in the database.

How does content in AACT compare to what is in ClinicalTrials.gov?

AACT includes all of the protocol and results data elements for studies that are publicly available at ClinicalTrials.gov. All publicly available content from the current record for a study is included in AACT "as-is". In general, the content that is contained in the AACT database preserves the content in the source XML files that are downloaded from ClinicalTrials.gov and content is not cleaned or manipulated in any way. However, to help facilitate queries using AACT, several additional variables derived from the raw content are included in AACT. Derived variables are indicated as such in the data dictionary. The history of changes to a study record that are available at the ClinicalTrials.gov archive site is not included in the current version of AACT.

What types of questions can be investigated using ClinicalTrials.gov data?

The AACT database contains both ‘study protocol’ and ‘results data’ elements. The protocol (or registration) records describe study characteristics, including sponsor, disease condition, type of intervention, participant eligibility, anticipated enrollment, study design, locations, and outcome measures. Summary results data elements, including participant flow, baseline characteristics, outcome results, and frequencies of serious and other adverse events, are included in AACT. The article by Tse et al. [8] may be helpful in understanding the components of the basic results that are reported at ClinicalTrials.gov.

How can protocol/registration data be used?

We anticipate that investigators will use the current database to explore the characteristics of selected subsets of clinical studies (e.g., typical enrollment for a phase 3 study in breast cancer patients), and to compare and contrast these characteristics across different subgroups of studies (e.g., sponsor; device versus drug intervention; or prevention versus treatment).
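
The first example might be approached with a query along the following lines. This is a sketch only: the column names (enrollment, phase, study_type) and value spellings ('Phase 3', 'Breast Neoplasms') are assumptions to verify against the data dictionary:

    -- Median enrollment for phase 3 interventional studies indexed to
    -- the MeSH condition term 'Breast Neoplasms'.
    SELECT COUNT(*) AS n_studies,
           percentile_cont(0.5) WITHIN GROUP (ORDER BY s.enrollment)
             AS median_enrollment
    FROM studies s
    JOIN browse_conditions bc ON bc.nct_id = s.nct_id
    WHERE s.study_type = 'Interventional'
      AND s.phase = 'Phase 3'
      AND bc.mesh_term = 'Breast Neoplasms';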

How can results and adverse events data be used?

Researchers may be able to use the basic results and adverse events summary data reported at ClinicalTrials.gov for meta-analysis or systematic review (e.g., to compare the efficacy and safety of different types of diabetes therapies). However, because only a small subset of studies registered at ClinicalTrials.gov are required to report results, the results data from ClinicalTrials.gov will most likely be a useful supplement to traditional data sources used for a meta-analysis or systematic review, such as published and unpublished manuscripts and abstracts, rather than the core data source. Standard techniques for valid meta-analysis or systematic review (e.g., the PRISMA statement [7]) should be used when determining how to appropriately identify and aggregate summary data gleaned from ClinicalTrials.gov and/or the literature.
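
As a sketch of how adverse event summaries might be extracted for such work (table and column names such as reported_events.subjects_affected, and the 'serious' event_type value, are assumptions to verify against the data dictionary):

    -- Serious adverse event counts by participant group for one
    -- (hypothetical) study.
    SELECT rg.title AS group_title,
           re.adverse_event_term,
           re.subjects_affected,
           re.subjects_at_risk
    FROM reported_events re
    JOIN result_groups rg ON rg.id = re.result_group_id
    WHERE re.nct_id = 'NCT00000000'   -- hypothetical NCT ID
      AND re.event_type = 'serious';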

How should data elements be interpreted?

When interpreting this information, you’re encouraged to refer to the authoritative definitions provided by the National Library of Medicine (NLM). The most recent data element definitions are available on the NLM site for studies and results data. Data interpretation may also depend on when and how the record was updated.

Note that the study record may be updated by the owner of the record at any time. Fields such as enrollment type may be changed from anticipated to actual, indicating that the value entered now reflects the actual rather than the planned enrollment. When data are downloaded, the result is a static copy of the database at that particular time point, and the history of changes made to the field is lost.
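
For example, analyses that require actual rather than planned enrollment might filter on the enrollment type field. A minimal sketch, assuming the enrollment_type column and the 'Actual' value spelling (verify against the data dictionary):

    -- Keep only studies whose enrollment value reflects actual,
    -- not anticipated, enrollment.
    SELECT nct_id, enrollment
    FROM studies
    WHERE enrollment_type = 'Actual';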

How complete and accurate are the data?

The presence of a record in a table indicates that information was submitted to ClinicalTrials.gov for at least one element in that table before the data were downloaded from ClinicalTrials.gov. Some data elements are more or less likely than others to have missing information, depending on several known factors.

“Missingness” of data may also depend on other unknown factors. Regardless of the cause of missing data, users of ClinicalTrials.gov data sets are encouraged to specify clearly how missing values and “N/A” values are handled in their statistical analysis. For example, are studies with missing values excluded from statistics summarizing that data element, or are they included? In some cases, missing values may be imputed based on other fields (e.g., if a study has a single arm, it cannot employ a randomized design).
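
One way to make such handling explicit is to report missingness directly before summarizing a field, as in this sketch:

    -- How many studies lack an enrollment value?
    -- COUNT(column) counts only non-null values.
    SELECT COUNT(*)                     AS n_studies,
           COUNT(enrollment)            AS n_with_enrollment,
           COUNT(*) - COUNT(enrollment) AS n_missing
    FROM studies;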

Although the FDAAA and other requirements do not apply to all fields in the database, users might consider including only studies registered post-FDAAA (September 2007), or studies with a primary completion date after December 2007. This will help to limit the number of missing values across many data elements. Users could also consider annotating data elements used in analysis according to whether or not they are FDAAA-required fields, if the user believes this might affect the extent of missing data.
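
A sketch of such a restriction, assuming the study_first_submitted_date and primary_completion_date columns (verify the names against the data dictionary):

    -- Restrict to studies plausibly subject to FDAAA: registered after
    -- September 27, 2007, or completing after December 25, 2007.
    SELECT nct_id
    FROM studies
    WHERE study_first_submitted_date >= DATE '2007-09-27'
       OR primary_completion_date > DATE '2007-12-25';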

Even when the data elements for a particular study are complete, users are cautioned to have modest expectations about their accuracy. In particular, results data posted at ClinicalTrials.gov may not be subject to the same level of critical scrutiny as results published in a peer-reviewed journal. As described by Zarin and colleagues in 'The ClinicalTrials.gov results database – update and key issues', ClinicalTrials.gov has implemented several measures to ensure data quality. For example, NLM staff apply automated business rules that alert data-providers when required elements are missing or inconsistent. In addition, some manual review is performed by NLM, and a record may be returned to the data-provider if revision is required. However, ClinicalTrials.gov staff cannot always validate the accuracy of submitted data (e.g., against an independent source). As Zarin et al. note, “… individual record review has inherent limitations, and posting does not guarantee that the record is fully compliant with either ClinicalTrials.gov or legal requirements” [1].

During our own analysis of the ClinicalTrials.gov database, several extreme values for numeric data elements were encountered, such as an anticipated enrollment of several million subjects. Before proceeding with aggregate analysis, users are encouraged to review data distributions in order to select appropriate analysis methods, and to run their own consistency checks (e.g., to compare whether the number of arm descriptions provided for the study matches the data element that quantifies the number of arms in the study design) as needed.
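
The arm-count check mentioned above might be sketched as follows. It assumes a number_of_arms column on Studies and one row per declared group in Design_Groups; adjust to the actual schema:

    -- Flag studies where the declared number of arms does not match
    -- the number of arm/group descriptions provided.
    SELECT s.nct_id, s.number_of_arms, COUNT(dg.id) AS described_groups
    FROM studies s
    LEFT JOIN design_groups dg ON dg.nct_id = s.nct_id
    GROUP BY s.nct_id, s.number_of_arms
    HAVING s.number_of_arms IS DISTINCT FROM COUNT(dg.id);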

Use of appropriate statistical inference

If the AACT results data are to be used to support a meta-analysis or systematic review of the safety or efficacy of a particular intervention, then standard methods of meta-analysis or systematic review (e.g., the PRISMA statement) should be used to appropriately account for study-to-study variability and other sources of uncertainty or bias. We recommend that authors consider the following points when deciding whether to report p-values, confidence intervals, or other probability-based inference when performing aggregate analysis of the ClinicalTrials.gov database:

Is the data-generating mechanism random?

Methods of statistical inference such as p-values and 95% confidence intervals are most appropriate when used to quantify the uncertainty of estimates or comparisons due to a random process that generates the data. Examples of such processes include selecting a random sample of subjects from a broader population, randomly assigning a treatment to a cohort of subjects, or tossing a coin whose future results we aim to predict.

In the following examples, we recommend against reporting p-values and 95% confidence intervals because the data generating mechanism is not random.

Example 1: Descriptive analysis of studies registered in the ClinicalTrials.gov database. In this case, the “sample” equals the “population” (i.e., the group about which we are making conclusions) and there is no role for statistical inference because there is no sample-vs-population uncertainty to be quantified.

Example 2: Descriptive analysis of the “clinical trials enterprise” as characterized by the studies registered in ClinicalTrials.gov. Despite mandates for study registration (Table 1), it may be that some studies that are required to be registered are not. In this case the sample (studies registered in ClinicalTrials.gov) may not equal the population (clinical trials enterprise). However, it is likely that those studies not registered are not excluded at random, and therefore neither p-values nor confidence intervals are helpful to support extrapolation from the sample to the population. To support such extrapolation, we recommend careful consideration of the studies that are highly likely to be registered (see section above on Population), and to limit inference to this population so that sample-vs-population uncertainty is minimal.

How can I objectively identify important differences?

In practice, p-values and confidence intervals are often employed, even when there is no random data generating process, to highlight differences that are larger than “noise” (e.g., authors may want to highlight differences with a p-value < .001). While this practice may not have a strong foundation in statistical philosophy, we acknowledge that many audiences (e.g., journal peer reviewers) may demand p-values because they appear to provide objective criteria for identifying larger-than-expected signals in the data. While we don’t encourage reporting of p-values for this purpose, we do encourage analysts to specify objective criteria for evaluating signals in the data. Examples are provided:

a) Prior to examining the data, specify comparisons of major interest, or quantities to be estimated.

b) Determine the magnitude of differences that would have practical significance (e.g., a 25% difference in source of funding between studies of 2 pediatric conditions, or a difference in enrollment of 100 participants).

c) Determine appropriate formulas for quantifying differences between groups or summarizing population variability. This quantification could take into account the observed difference, the variability in the data, and the number of observations.

Specific tips for working with the AACT database

What were the primary considerations when designing the database?

When designing the database, we tried to balance the following objectives:

  • Present data exactly as it exists in ClinicalTrials.gov.
  • Make the information as easy to understand & analyze as possible.
  • Use consistent names and structures throughout the database. Make it predictable; minimize uncertainty.
  • Provide value-added attributes, identify them as such, and keep them separate from the raw ClinicalTrials.gov content. (The Calculated_Values table contains data elements that were derived from existing data.)

Naming Conventions

  • Table names are all plural. (e.g., studies, facilities, interventions)
  • Column names are all singular. (e.g., description, phase, name)
  • Table/column names derived from multiple words are delimited with underscores. (e.g., mesh_term, first_received_date, number_of_groups)
  • Case (upper vs lower) is not relevant, since PostgreSQL treats unquoted identifiers as case-insensitive. Studies, STUDIES and studies all represent the same table and can be used interchangeably.
  • Information about study design entered into ClinicalTrials.gov during registration is stored in AACT tables prefixed with Design_ to distinguish it from the results data. For example, the Design_Groups table contains registry information about anticipated participant groups, whereas the Result_Groups table contains information that was entered after the study has completed to describe actual participant groups. Similarly, Design_Outcomes contains information about the outcomes to be measured, and Outcomes contains information about the actual outcomes reported when the study completed.
  • Where possible, tables & columns are given fully qualified names; abbreviations are avoided. (e.g., description rather than desc; category rather than ctgry)
  • Unnecessary and duplicate verbiage is avoided. For example: Studies.source instead of Studies.study_source
  • Columns that end with _id represent foreign keys. The prefix to the _id suffix is always the singular name of the parent table to which the child table is related. These foreign keys always link to the id column of the parent table.

    Child_Table.parent_table_id = Parent_Tables.id

For example, a row in Facility_Contacts links to its facility through the facility_id column.

    Facility_Contacts.facility_id = Facilities.id
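
    For illustration, a minimal query that follows this convention might look as below (column names in Facility_Contacts, such as contact_type, are assumptions to verify against the data dictionary):

        -- List each facility contact alongside its parent facility,
        -- joining child to parent via the facility_id foreign key.
        SELECT f.name AS facility_name,
               fc.contact_type,
               fc.name AS contact_name
        FROM facility_contacts fc
        JOIN facilities f ON f.id = fc.facility_id;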

Structural Conventions

  • Every table has an nct_id column that links its rows to the related study in the Studies table, so all study-related data can be linked directly to Studies via the nct_id. (Note: the schema diagram omits several of the lines that represent relationships to Studies in order to avoid clutter; these relationships can be assumed, since every table includes the NCT ID.) For example, the join condition Studies.nct_id = Outcomes.nct_id links outcomes to their related study; a runnable sketch appears after this list.

  • Every table has the primary key: id. (Studies is the one exception, since its primary key is the unique study identifier assigned by ClinicalTrials.gov: nct_id.)
  • Columns that end with _date contain date-type values.
  • Columns that contain month/year dates are saved as character strings in a column with a _month_year suffix. A date-type estimate of the value (using the 1st of the month as the 'day') is stored in an adjacent column with the _date suffix. (This applies to date values in the Studies table.)
  • Derived/calculated values are stored in the Calculated_Values table.
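
A minimal sketch of the nct_id linkage described above (the outcomes.title column is an assumption; check the data dictionary for exact names):

    -- Attach a study-level attribute (phase) to each reported outcome
    -- using the universal nct_id key.
    SELECT s.nct_id, s.phase, o.title AS outcome_title
    FROM outcomes o
    JOIN studies s ON s.nct_id = o.nct_id;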

While we tried to rigorously adhere to these conventions, reality occasionally failed to cooperate, so compromises were made and exceptions to these rules exist. For example, to limit duplicate verbiage, we preferred the table name References over Study_References; however, the word 'References' is a PostgreSQL reserved word and cannot be used as a table name, so Study_References it is.

How are arms/groups identified?

Considerable thought went into how to present arm and group information to facilitate analysis by simplifying naming and data structures while retaining data fidelity. NLM defines groups/arms this way:

  • Arm: A pre-specified group or subgroup of participant(s) in a clinical trial assigned to receive specific intervention(s) (or no intervention) according to a protocol.
  • Group: The predefined participant groups (cohorts) to be studied, corresponding to the Number of Groups specified under Study Design.

In short, observational studies use the term ‘groups’; interventional studies use ‘arms’, though for the purpose of analysis, they both refer to the same thing. Because 'group' is more intuitive to the general public, AACT standardized on the term 'group(s)' and does not use the term 'arms'.

Participant Groups: Registry vs Results

When a study is registered in ClinicalTrials.gov, information is entered about how the study defines participant groups. In AACT, this information is stored in the Design_Groups table, while information about actual groups that is entered after the study has completed is stored in the Result_Groups table. (AACT has not attempted to link data between these 2 tables.)

Result information, for the most part, is organized in ClinicalTrials.gov by participant group. Result_Contacts & Result_Agreements are the only result tables not associated with groups. This section describes how AACT has structured group-related results data.

AACT provides four general categories of result information:

  • Participant Flow (Milestones & Drop/Withdrawals)
  • Baselines
  • Outcomes
  • Reported Events

The Result_Groups table represents an aggregate list of all groups associated with these result types. All result tables (Outcomes, Outcome_Counts, Baseline_Measures, Reported_Events, etc.) relate to Result_Groups via the foreign key result_group_id.

For example, Outcomes.result_group_id = Result_Groups.id.

ClinicalTrials.gov assigns an identifier to each group/result that is unique within the study. The identifier includes a leading character that represents the type of result (B for Baseline, O for Outcome, E for Reported Event, and P for Participant Flow) followed by a number that uniquely identifies the group in that context. To illustrate, suppose study NCT001 had 2 groups, experimental & control, and reported multiple baseline measures, outcome measures, reported events, and milestones/drop-withdrawals for each group. The following table illustrates how the Result_Groups table organizes the group information received from ClinicalTrials.gov in this case:

id | nct_id | result_type | ctgov_group_code | group title | explanation
---|--------|-------------|------------------|-------------|------------
1 | NCT001 | Baseline | B1 | Experimental Group | All Baseline_Measures associated with this study's experimental group link to this row.
2 | NCT001 | Baseline | B2 | Control Group | All Baseline_Measures associated with this study's control group link to this row.
3 | NCT001 | Outcome | O2 | Experimental Group | All Outcome_Measures associated with this study's experimental group link to this row.
4 | NCT001 | Outcome | O1 | Control Group | All Outcome_Measures associated with this study's control group link to this row.
5 | NCT001 | Reported Event | E1 | Experimental Group | All Reported_Events associated with this study's experimental group link to this row.
6 | NCT001 | Reported Event | E2 | Control Group | All Reported_Events associated with this study's control group link to this row.
7 | NCT001 | Participant Flow | P1 | Experimental Group | All Milestones & Drop_Withdrawals associated with this study's experimental group link to this row.
8 | NCT001 | Participant Flow | P2 | Control Group | All Milestones & Drop_Withdrawals associated with this study's control group link to this row.

Notice that the integer in the code provided by ClinicalTrials.gov (ctgov_group_code) is often the same for one group across the different result types, but this is not always the case. In the example above, B1, E1 & P1 all represent the 'experimental group', so you might be tempted to think that '1' equates to the 'experimental group' for this study; however, for Outcomes, O1 represents the control group. In short, the number in the ctgov_group_code often links the same group across all result types in a study, but for about 25% of studies this is not the case, so it can't be counted on to indicate this relationship. (We had hoped to use a single row in Result_Groups to uniquely represent a participant group in the study and link all related results data from the various tables to that one row; however, this was not possible. Therefore, one group will typically be represented multiple times in the Result_Groups table: once for each type of result data.)
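
Putting this together, results for a study are retrieved by joining a result table to Result_Groups. A sketch continuing the hypothetical NCT001 example (Baseline_Measures column names such as param_value are assumptions to verify against the data dictionary):

    -- Baseline measures for study NCT001, labeled with the title of
    -- the group each measure describes.
    SELECT rg.ctgov_group_code,
           rg.title AS group_title,
           bm.title AS measure,
           bm.param_value
    FROM baseline_measures bm
    JOIN result_groups rg ON rg.id = bm.result_group_id
    WHERE bm.nct_id = 'NCT001';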

Information about dates

ClinicalTrials.gov has historically provided the month/year (without day) for several date values, including start date, completion date, primary completion date, and verification date. Because the 'day' was not provided, AACT stored these dates in the Studies table as character-type rather than date-type values. Character-type dates are of limited utility in an analytic database because they can't be used to perform standard date calculations, such as determining study duration, computing the average number of months taken to report results, or identifying studies registered before/after a certain date.

NLM recently reported that ClinicalTrials.gov will start providing full date values (mm/dd/yy) for these date elements; however, this only applies to new studies; pre-existing studies will continue to have only month/year date values. We considered various alternatives for handling dates given this issue, and decided to provide 2 columns in the Studies table for each date element: 1) a character-type column that displays the value exactly as it was received from ClinicalTrials.gov, and 2) a date-type column that can be used for date calculations. If the date received from ClinicalTrials.gov has only month/year, it is assigned the first day of the month when converted to a date. For example, a study with start date June, 2014 will have June, 2014 in the start_month_year column and 06/01/14 in the start_date column.
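
The date-typed columns support ordinary date arithmetic. For example (a sketch; durations are approximate because month/year-only dates are anchored to the first of the month):

    -- Approximate study duration in days.
    SELECT nct_id,
           completion_date - start_date AS approx_duration_days
    FROM studies
    WHERE start_date IS NOT NULL
      AND completion_date IS NOT NULL;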

Information about trial sites (Facilities and Countries)

Information about organizations where the study is/was conducted (a.k.a. facilities or trial sites) is stored in the Facilities table. This represents the facility information that was included in the study record on the date the information was downloaded from ClinicalTrials.gov.

AACT includes a Countries table, which contains one record per unique country per study. The Countries table includes countries currently & previously associated with the study. The removed column identifies those countries that are no longer associated with the study. NLM uses facilities information to create a list of unique countries associated with the study. In some cases, ClinicalTrials.gov data submitters subsequently remove facilities that were entered when the study was registered. Naturally these will not appear in AACT's Facilities table. If all of a country’s facilities have been removed from a study, NLM flags the country as ‘Removed’ which appears in AACT as Countries.removed = true.

The reasons facilities are removed are varied and unknown. A site may have been removed because it was never initiated or because it was entered with incorrect information. The recommended action for sites that have completed or have terminated enrollment is to change the enrollment status to “Completed” or “Terminated”; however, such sites are sometimes deleted from the study record by the responsible party. Data analysts may consider using Countries where removed is set to true to supplement the information about trial locations that is contained in Facilities, particularly for studies that have completed enrollment and have no records in Facilities.

Users who are interested in identifying countries where participants are being/were enrolled may use either the Facilities table or the Countries table (where Countries.removed is not true) with equivalent results.
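
For example, to list the countries currently associated with each study, excluding those flagged as removed:

    -- Countries still associated with each study; 'removed IS NOT TRUE'
    -- also keeps rows where the flag is null.
    SELECT nct_id, name
    FROM countries
    WHERE removed IS NOT TRUE;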

MeSH terms in Browse_Conditions and Browse_Interventions

When data submitters provide information to ClinicalTrials.gov about a study, they’re encouraged to use Medical Subject Heading (MeSH) terminology for interventions, conditions, and keywords. The Browse_Conditions and Browse_Interventions tables contain MeSH terms generated by an algorithm run by NLM. These terms are regenerated nightly for all studies in the ClinicalTrials.gov database, using the most up-to-date information in the study record, the latest version of the algorithm, and the version of the MeSH thesaurus in use at that time.
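
As an illustration of how these tables might be used (a sketch, assuming the mesh_term column described in the data dictionary):

    -- The condition MeSH terms assigned to the largest number of studies.
    SELECT mesh_term, COUNT(DISTINCT nct_id) AS n_studies
    FROM browse_conditions
    GROUP BY mesh_term
    ORDER BY n_studies DESC
    LIMIT 10;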

“Delayed Results” data elements are available in AACT

A responsible party of an applicable clinical trial may delay the deadline for submitting results information to ClinicalTrials.gov for up to two additional years if one of the following two certification conditions applies to the trial:

  • Initial approval: the trial completed before a drug, biologic, or device studied in the trial is initially approved, licensed, or cleared by the FDA for any use.
  • New use: the manufacturer of a drug, biologic, or device is the sponsor of the trial and has filed, or will file within one year, an application seeking FDA approval, licensure, or clearance of the new use studied in the trial.

A responsible party may also request, for good cause, an extension of the deadline for the submission of results.

Studies for which a certification or extension request has been submitted include the date of the first certification or extension request in the data element Studies.received_results_disposit_date.


References

  1. Zarin, D. A., Tse, T., Williams, R. J., Califf, R. M., and Ide, N. C. (2011). The ClinicalTrials.gov results database – update and key issues. N Engl J Med 364: 852–60.
  2. Food and Drug Administration Amendments Act of 2007. Public Law 110-85.
  3. Laine, C., Horton, R., DeAngelis, C. D., et al. (2007). Clinical trial registration – looking back and moving ahead. N Engl J Med 356: 2734–6.
  4. Communication from the Commission regarding the guideline on the data fields contained in the clinical trials database provided for in Article 11 of Directive 2001/20/EC to be included in the database on medicinal products provided for in Article 57 of Regulation (EC) No 726/2004. In: European Commission, ed. Official Journal of the European Union, 2008. (2008/C 168/02.)
  5. Guidance on the information concerning paediatric clinical trials to be entered into the EU Database on Clinical Trials (EudraCT) and on the information to be made public by the European Medicines Agency (EMEA), in accordance with Article 41 of Regulation (EC) No 1901/2006. In: European Commission, ed. Official Journal of the European Union, 2009. (2009/C 28/01.)
  6. Zarin, D. A., Ide, N. C., Tse, T., et al. (2007). Issues in the registration of clinical trials. JAMA 297: 2112–2120.
  7. Moher, D., Liberati, A., Tetzlaff, J. and Altman, D. G. (for the PRISMA Group) (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339: 332–336.
  8. Tse, T., Williams, R. J., Zarin, D. A. (2009). Reporting “basic results” in ClinicalTrials.gov. CHEST 136: 295–303.