Page 1: ECHOES - IASCT

IASCT Newsletter
For Private Circulation only

www.iasct.net

Contents:

ConSPIC 2012

Project Tracking

Validation in R

Graphs using SG procedures and GTL

Biomarkers and Targeted Therapy in Oncology Trials

Editors of this issue

Kailash Narayanan Mahendra Bijarnia Mansi Gandhi

November 2012: Volume 4, Issue 2

ECHOES

Page 2: ECHOES - IASCT

Conference of Statistical Programmers in Clinical Research [ConSPIC] – Jaipur 2012

The Conference of Statistical Programmers in Clinical Research (ConSPIC) aims to provide a platform to share novel ideas in statistical programming, to network, and to strengthen the statistical programming community in India. The conference encourages statistical programmers from the healthcare industry to come together and discuss efficient techniques and advances in statistical programming.

This was the second year that ConSPIC was held. The first edition, ConSPIC 2011, was held in Bangalore, and it was one of the first events of its kind in India. Conferences until then were predominantly “invited” sessions, with thought leaders from the industry giving talks from a 50,000-foot level. ConSPIC, for the first time, encouraged everyday users of all levels of seniority to come forward and exchange ideas that had a direct impact on their daily work.

ConSPIC 2012 continued that innovative approach and, having learnt from the previous year, attempted to improve upon it. ConSPIC 2012 had its own share of “firsts”: it was the first time that the conference was held in a city without a strong local population of users/attendees. ConSPIC 2012 was held in Jaipur, and every single attendee lived and worked in a different city and therefore had to travel to the conference. This had its own set of advantages: by staying in the same hotel and spending some of the evenings together, we got a real opportunity to bond with one another and make new friends from other companies.

Another “first” for ConSPIC 2012 was the poster competition. This was the first time that we organized a full-fledged poster competition, with 16 entries. Most of the people who attended the conference felt that putting together a poster was far more challenging (and fun) than giving a talk, and I hope this sets the trend for much greater participation in poster competitions in the years to come.

The two-day conference had a total of 44 presentations and 16 posters; there was also a pre-conference workshop on R the day before the conference started. The presentations were divided into two parallel sessions, which allowed attendees to choose topics of interest and increased learning opportunities. The presentation and poster topics focused on areas like increasing efficiency, enhancing quality, statistics, CDISC/ADaM, tips and tricks, and advanced SAS. The presentations were chosen from the 80+ abstracts received, through a blinded review by a very able and experienced program committee comprising Ajay Sathe (Cytel), Ilango Ramanujam (TAKE), Sangeetha Loganathan (Quintiles), Nikita Agnihotri (TCS), Murali Ganesan (Cognizant), and Madhura Shivaram (Accenture).

Page 3: ECHOES - IASCT

Along with all this work there was also a lot of play: most of the conference attendees used the day before the conference started, or the day after it ended, to take a tour of the city. Jaipur, known for its palaces and tourist attractions, offered many great sight-seeing opportunities, which the attendees made the most of. And no report of the conference would be complete without mentioning the dinner/dance party on the evening of Day 1. Watching senior leaders like Ajay Sathe, Ashwini Mathur, and Ilango Ramanujam dance to Bollywood and Rajasthani numbers was a sight to behold, outdone only, perhaps, by the excellent traditional “daal-baati-churma” that was part of the dinner fare. All in all, it was a most enjoyable evening.

So, with all this being so good, how does one proceed next year? The one disappointment in this year’s conference was the low attendance from some of the companies that have the highest number of associates working in this domain. Though the total number of attendees grew over last year to just over 100, the majority came from a handful of companies operating in this space: large multinational companies and some selected service provider organizations. The number of attendees from some of the large service provider organizations, and even from Indian pharmaceutical companies, continued to be very low. If this lack of broad-based participation continues, it will be difficult to fully realize the potential such a conference can have for sharing and growing ideas.

We made a good start, and we have followed it up with an even stronger second year. Next year’s conference, ConSPIC 2013, will be in Cochin, and I surely hope that the third edition of this unique conference is even bigger and better. Each of the ConSPIC 2012 attendees has pledged to bring a friend to ConSPIC 2013. If this holds true, ConSPIC 2013 is already on track to be one of the most successful conferences in this domain. But for this to happen, I need your support, and I hope I can have it. See you all in Cochin for ConSPIC 2013.

For more details and pictures, please visit our Facebook page: http://www.facebook.com/#!/Conspic2012

Vishwanath Iyer (Mahesh)
ConSPIC 2012 organizer
Secretary, IASCT

* The views and opinions expressed in this article belong only to the author, and should not be attributed to IASCT or any organization with which the author is employed or affiliated.

Page 4: ECHOES - IASCT

ConSPIC - Potshots

Page 5: ECHOES - IASCT

Bag it, Tag it & Put it: Project tracking one click away!

- Abhishek Bakshi [Cytel]

ABSTRACT

Entering information in a project programming tracker is a menial, time-consuming task, and it keeps project leads from getting accurate and up-to-date information on a project. So how about getting it done in a single click? This presentation discusses a tool that solves this problem: QCCheck, a macro utility that, in conjunction with the power of ODS, creates a fully automated project tracking spreadsheet giving a single-shot, real-time view of any project. The utility reduces the dependency on manual data entry and saves programmers' precious time with features such as highlighted QC pass/fail results, hyperlinks to code/tables/datasets, and timestamps of QC and batch submission.

INTRODUCTION

In the fast-paced production environment of a clinical trial study, where timelines are short and last-minute change requests are frequent, quick project setup, timely and accurate tracking of project status, and quick project wind-up are vital to the timely delivery of outputs for regulatory submissions.

This paper introduces a SAS utility that creates a project tracking sheet, performs the project startup activities to eliminate manual entry of information, and keeps track of validation comparison status and re-validation requirements. The utility also ensures that a project team always has an up-to-date picture of validation status throughout the life cycle of a project.

METHOD

The utility is implemented for Windows SAS 9.2 and uses %QCCheck (a macro utility) in conjunction with ODS TAGSETS to create a fully automated project tracking spreadsheet (ProjectTrackit.XML) (Fig 1).

Fig 1: Automated project tracking utility overview

Page 6: ECHOES - IASCT

1. %QCCHECK MACRO UTILITY

It is well known that PROC COMPARE is the most widely used tool for comparing two datasets. QCCheck uses the system macro variable SYSINFO (set after execution of PROC COMPARE) to collect the results of multiple comparisons and places them in one common dataset (QCStatus.sas7bdat). This dataset contains information such as the timestamp of execution, the source dataset, and the Pass/Fail result of the comparison (Fig 2).

Fig 2: Output from %QCCheck (QCStatus.sas7bdat)
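As a minimal sketch of how SYSINFO can be captured after a comparison (library and dataset names here are illustrative, not the actual QCCheck internals):

title;
proc compare base=prod.adsl compare=qc.adsl noprint;
run;
%let rc = &sysinfo; /* capture immediately; the next step resets SYSINFO */

data work.qcstatus;
  length dataset $32 result $4;
  dataset = "ADSL";
  result  = ifc(&rc = 0, "PASS", "FAIL"); /* 0 means an exact match */
  runtime = datetime();                   /* timestamp of execution */
  format runtime datetime20.;
run;

A return code of 0 means the datasets matched on all comparison criteria; any non-zero value encodes the kinds of differences PROC COMPARE found.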

2. ODS TAGSETS.EXCELXP

By using the SAS Output Delivery System (ODS) TAGSETS, one can create a data grid that exploits the functionality of the Microsoft Excel application. ODS TAGSETS.EXCELXP generates XML output that can be opened in Microsoft Excel (2003 or later). The data grids created can dynamically sort, filter, freeze rows and columns, create tables and panels, hide columns, apply styles, and use various other Microsoft Excel/.NET features.
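A minimal sketch of how such a grid might be produced (file, sheet, and dataset names are illustrative; SHEET_NAME, FROZEN_HEADERS, and AUTOFILTER are standard ExcelXP tagset options):

ods listing close;
ods tagsets.excelxp file="ProjectTrackit.xml"
    options(sheet_name='Tracking' frozen_headers='yes' autofilter='all');

proc print data=work.qcstatus noobs label;
run;

ods tagsets.excelxp close;
ods listing;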

3. PROJECTTRACKIT UTILITY

ProjectTrackit uses %QCCheck and ODS TAGSETS.EXCELXP to generate a fully automated project tracking sheet (XML) (Fig 1). When ProjectTrackit is executed, it asks the user/lead to enter the allocations manually. Based on the allocations entered, .sas files with header information are created; these files are created only on the first run. From the second run onwards, the utility checks whether the %QCCheck output dataset (QCStatus.sas7bdat) exists. If it does, it extracts the most recent QC status and uses ODS TAGSETS.EXCELXP to arrange the output into a fully dynamic project tracking XML sheet (Fig 3).
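The first-run versus subsequent-run branching can be sketched with the EXIST function; the macro and helper names below are assumptions for illustration, not the actual utility code:

%macro projecttrackit;
  %if %sysfunc(exist(trk.qcstatus)) %then %do;
    /* subsequent runs: read the latest QC status and rebuild the XML sheet */
    %update_tracker;   /* hypothetical helper */
  %end;
  %else %do;
    /* first run: create .sas program shells from the allocation entries */
    %create_headers;   /* hypothetical helper */
  %end;
%mend projecttrackit;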

Page 7: ECHOES - IASCT

Fig 3: Output from ProjectTrackit.sas (ProjectTrackIt.xml)

4. IMPORTANT FEATURES OF PROJECTTRACKIT

Single view of complete project status: once executed, ProjectTrackit gives a one-shot view of the complete project status, with all source/QC programming information in one place, making it easier for the lead/programmer to track the progress of the project.

Codes/datasets/outputs can be opened directly from the tracking sheet: as the code/dataset columns are hyperlinked, they can be opened directly from the sheet, which makes it more dynamic than manual tracking and also makes access to codes/datasets easier.

Gives the last modified date of code for both source and QC programmers automatically: having a 'Last Modified date' column saves both source and QC programmers the time spent entering code completion dates manually (especially useful when there are many TLGs).

Page 8: ECHOES - IASCT

Gives information on whether the QC code was batch submitted correctly: the QC Retest (Pass/Fail) column helps to know whether the QC code was submitted after the submission of the source programmer's code.

Displays the PROC COMPARE results, run time, and links to the compare output (.lst file): the PROC COMPARE result column shows whether the dataset passed QC, and the associated link gives easy access to the PROC COMPARE output.

FUTURE ENHANCEMENTS

There are endless possibilities for attaching other "plug-in" utilities to this tool, because the project tracking sheet is a central location for storing vital information about the project components. The following plug-ins are under development:

A utility to extract CRF annotation data from the aCRF (PDF), to place domain names in the allocation sheet automatically instead of entering them manually.

Log check columns for both source and QC code, to check whether the logs are error free.

A facility to send email directly from SAS to the source programmer notifying them of the QC status, if it fails.

CONCLUSION

Automation of various project management activities can greatly enhance a team's ability to submit deliverables on time. This utility aims to achieve that goal: it creates a tracking document in a central place where a lead programmer/manager can automate some routine tasks and effectively and accurately track the project status.

Page 9: ECHOES - IASCT

Validation of Statistical Analyses of Clinical Trials in R

- Tejasweeni

ABSTRACT

This article gives a brief introduction to clinical trials for "cosmetics" and how R can be used to validate the statistical analysis of such trials.

INTRODUCTION

Cosmetics companies make fancy claims to grab our attention. The following are illustrations of claims made by companies:

Product: Claim
Garnier Light: "Get up to 2 tones fairer in just 7 days"
Lakme Absolute: "Lasts up to 16 hrs"
Olay Regenerist: "Younger looking skin without syringes and scalpels"

However, in recent news, the FDA rebuked L'Oreal's Lancôme USA subsidiary over the marketing claims of its anti-wrinkle products. The claims gave the impression that the product worked more like a drug than a cosmetic. Clearly, an ambitious claim can get a company into difficulty with regulators, so the validity of these claims is very important. It is checked with clinical trials; since these are for cosmetics, they may be called "cosmetics trials".

Conventional designs:

Conventional designs used in drug trials include the "parallel design" and the "cross-over design". Parallel designs require a large sample size to account for large inter-subject variability. Cross-over designs are used to overcome this difficulty: the same subject receives different treatments in different periods separated by a washout period. The basic requirement for such designs is that the experimental conditions remain the same between periods. Suppose we start a trial for a fairness product in summer, with a treatment period of 12 weeks. By the time the first treatment period ends it may be monsoon, changing the external/environmental conditions for the next product. This can be a challenge in cosmetics trials. For fairness products, special designs like the half-face design and the forearm design address these limitations of standard designs.

Cosmetics trial designs:

In the half-face design a subject receives two products, one on each side of the face, assigned randomly. Thus each subject acts as her own control. If we want to compare more than two products, "forearm designs" are used. In such designs a subject's forearm is divided into several 'treated' and 'untreated' sites. Products are applied on the 'treated' sites, whereas the 'untreated' sites act as control.

Page 10: ECHOES - IASCT

Endpoints:

The effect of any product is measured through endpoint(s). Typical endpoints in cosmetics trials include "skin color", "spot color", "under-eye wrinkles", "roughness", "elasticity", etc. The actual measurement may be carried out either visually or using an instrument. Endpoints measured using a fairness scale are called visual endpoints. The data on such endpoints is usually in the form of graded scores, hence non-parametric tests are more appropriate for the analysis.

Endpoints measured using instruments like a corneometer or a cutometer are called instrumental endpoints. Generally the data on such endpoints are continuous and expected to follow a normal distribution, so as a rule of thumb, parametric tests are used for the analysis.

Validation:

In most CROs, there is a standard practice for quality assurance. A study is assigned to a team of two: one is the developer, who analyzes the data and prepares the tables and the report; the other is the validator, who checks the results in the report. At Cytel, statistical analysis is done using software packages such as SAS, Minitab, and JMP. However, R is considered useful for validation purposes due to its data handling and modeling capabilities.

The R-based validation procedure starts with validation of the analysis dataset (AD). First the AD is created in R and saved as a CSV file. This file is read into SAS and compared with the developer's AD using PROC COMPARE. (A function called write.foreign is also available in R, with which data can be saved directly as a SAS dataset.) The AD so created is used for the further statistical analysis, and the results obtained are again validated in R. In cosmetics trials the primary interest is in checking the efficacy of the product and comparing it with other product(s) (placebo or reference). Sometimes it is also of interest to see how long the effect of the product is retained after its discontinuation; the period after discontinuation is called the "regression phase".
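A minimal sketch of the SAS side of this comparison step (file paths, libraries, and dataset names are illustrative):

proc import datafile="C:\validation\ad_from_r.csv"
    out=work.ad_r dbms=csv replace;
  getnames=yes;
run;

proc compare base=dev.ad compare=work.ad_r listall;
run;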

The Wilcoxon signed rank test is used for the analysis of:
i. Balance checking at baseline (paired differences).
ii. Efficacy of a product (change from baseline values).
iii. Comparison of products (paired differences).

The null hypothesis in each case is that the population median (of differences) is equal to zero. For visual endpoints, the change-from-baseline scores are sometimes zero; the Wilcoxon test ignores these zero values, which reduces the effective sample size. Pratt and Lehmann introduced a modified version of the test that takes care of these zero values, called the "Wilcoxon Pratt Lehmann test". There is no direct procedure available in SAS to carry out this test, so a program has been written for it; in R, however, a direct function is available.
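For the standard (unmodified) signed rank test, PROC UNIVARIATE reports the statistic in its Tests for Location table; a sketch on assumed variable names follows, noting that the Pratt-Lehmann modification still needs custom code, as described above:

data work.diff;
  set work.scores;
  chg = post - base; /* change from baseline */
run;

proc univariate data=work.diff;
  var chg; /* the "Signed Rank" row tests median difference = 0 */
run;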

Page 11: ECHOES - IASCT

Validation findings:

In some situations paired differences cannot be used for comparing visual data, so a two-sample non-parametric test, the Mann-Whitney test, is used. During validation of the results of this test, the value of the test statistic (W) from the two packages, SAS and R, did not match; the p-values, however, did. On investigation it was found that the difference was due to the different approaches the two packages use for calculating the sum of scores (i.e., the test statistic W). The standardized W follows an asymptotically normal distribution, denoted by Z, and since this Z turns out to be the same in both packages, the p-values match. Hence one has to be careful about the built-in functions of different software packages.
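On the SAS side, the two-sample Wilcoxon/Mann-Whitney test comes from PROC NPAR1WAY (dataset and variable names are assumed for illustration):

proc npar1way data=work.scores wilcoxon;
  class treatment;   /* the two groups being compared */
  var score;
  exact wilcoxon;    /* optional exact p-value for small samples */
run;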

Conclusion:

In conclusion, one can say that it is always good practice to use a separate software package for validation, and R is a great choice.

Page 12: ECHOES - IASCT

Graphs made simple using SG procedures & GTL: Taking help from ODS Graphics Infrastructure

- Harivardhan J [Ockham Development Group]

Abstract

This article provides a brief description of creating and managing graphs with the new family of Statistical Graphics (SG) procedures and the Graph Template Language (GTL), introduced in SAS 9.2. These procedures provide for concise visual presentation and easy interpretation of large clinical trial data.

Introduction

In SAS 9.2, SAS/GRAPH introduces new procedures designed to create graphs ranging from simple plots to advanced graphics, like multi-cell layouts, all of which can be created using clear and concise syntax. These new procedures are as follows:

• SGPLOT: To create individual plots and charts with overlay capabilities

• SGPANEL: To create paneled plots and charts by classification variables

• SGSCATTER: To create comparative scatter plot panels, with the capability to overlay fits and confidence limits

• GTL and PROC SGRENDER: To define a graph template and create a graph that is beyond the capability of the procedures listed above

Method

Creating graphs using ODS Graphics Designer

1) SGPLOT

The SGPLOT procedure creates single-cell plots and charts with overlay capabilities, e.g., scatter, series, step, band, needle, box plot, histogram, and dot plots, bar charts, normal curves, loess fits, regression fits, etc. Some examples are presented below:

Frequency Distribution Histogram:

title "<title>";
proc sgplot data=<dataset name>;
  histogram <variable name>;
  density <variable name>;
  density <variable name> / type=kernel;
  keylegend / location=inside position=topright;
run;

Vertical Box Plot:

proc sgplot data=<dataset name>;
  vbox <variable name> / category=<category variable>;
  xaxis display=(<options>);
  yaxis display=(<options>);
run;

Regression Line:

proc sgplot data=<dataset name>;
  reg x=<x-variable> y=<y-variable> / CLM CLI;
run;

Comparing Two Line Plots:

proc sgplot data=<dataset name>;
  series x=<x-variable> y=<y-variable> / group=<variable name>;
  refline '<reference value>' / axis=x;
  keylegend / position=topright across=1 location=inside;
run;

Page 13: ECHOES - IASCT

The SGPANEL procedure creates paneled graphs by class variables. The plots within each panel are similar to the plots generated by the SGPLOT procedure. The following code can be used to compare the output produced by sgplot and sgpanel:

2) SGPANEL

proc sgplot data=<dataset name>;
  hbar <variable name> / response=<variable name> stat=sum;
run;

proc sgpanel data=<dataset name>;
  panelby <variable name>;
  hbar <variable name> / response=<variable name> stat=sum;
run;

The SGSCATTER procedure creates paneled scatter plots, with the ability to overlay fits and confidence limits. Some examples are presented below:

3) SGSCATTER

Prediction Ellipse in a Scatter Plot:

proc sgscatter data=<dataset name> (where=(<condition>));
  plot <variable1>*<variable2> <variable3>*<variable4> / group=<variable name> ellipse;
run;

Scatter Plot to Compare the Effect of Multiple X-Variables on a Y-Variable:

proc sgscatter data=<dataset name>;
  compare y=<y-variable> x=(<x-variable1> <x-variable2>) / reg=(cli clm);
run;

Prediction Ellipse in a Scatter Plot Matrix:

proc sgscatter data=raw.vs;
  matrix vshr vsdia vssys / ellipse=(type=predicted);
run;

Scatter Plot Matrix with Prediction Ellipses and Frequency Distribution Histograms:

proc sgscatter data=raw.vs;
  matrix vshr vsdia vssys / diagonal=(histogram normal) ellipse=(type=predicted);
run;

Graph Template Language

GTL is the underlying language for the default templates in SAS for procedures that use ODS Graphics. One can use either of the following two approaches to define a graph template without writing GTL code from scratch:

Approach 1:

Page 14: ECHOES - IASCT

An SG procedure can be used to generate the plot, and the underlying GTL syntax can be written out to a graph template file using the TMPLOUT= option. The graph template can then be customized as required, and PROC SGRENDER can be used to associate the template with a dataset to create the graph.
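A minimal sketch of Approach 1 (the file path is illustrative; the template written by TMPLOUT= is typically named after the procedure, here sgplot):

/* 1. Write the GTL behind this plot to a file */
proc sgplot data=sashelp.class tmplout="C:\gtl\scatter.sas";
  scatter x=height y=weight;
run;

/* 2. Edit the saved PROC TEMPLATE code as required, compile it, then render */
%include "C:\gtl\scatter.sas";
proc sgrender data=sashelp.class template=sgplot;
run;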

Approach 2:

Save and reuse the SAS/GRAPH code generated by the ODS Graphics Designer, which is written using GTL and PROC SGRENDER. Generating the GTL code and reviewing it is a good way to learn GTL! The PROC TEMPLATE code does not itself produce a graph; it creates the template and stores it in the SASUSER.TEMPLAT item store by default.

To verify whether the template was created, the ODSTEMPLATE command can be used, which opens the Templates window, where all item stores and their contents can be viewed. All STATGRAPH templates are identified by a common icon. To produce a graph, the following SGRENDER procedure can be used:

ods listing close;
ods pdf file="<path>";
proc sgrender data=<dataset name> template=<STATGRAPH template to be used>;
run;
ods pdf close;
ods listing; /* reopen the listing destination for subsequent output */

CONTROLLING OUTPUT

ODS GRAPHICS ON < / RESET
  IMAGEFMT= STATIC | GIF | PNG | JPEG | other-types
  IMAGENAME= 'path-and-name'
  HEIGHT= size WIDTH= size       /* default: HEIGHT=480px WIDTH=640px */
  SCALE= ON | OFF
  BORDER= ON | OFF
  ANTIALIASING= ON | OFF
  IMAGEMAP= ON | OFF             /* produces tooltips for HTML destination only */
  more-options >;

<procedures or DATA steps>

ODS GRAPHICS OFF;
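For example, a typical invocation might look like this (file and image names are illustrative):

ods graphics on / reset imagefmt=png height=400px width=600px imagename='vboxwt';
ods html file="graphs.html";

proc sgplot data=sashelp.class;
  vbox weight / category=sex;
run;

ods html close;
ods graphics off;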

Conclusion

The new features introduced in SAS 9.2 are powerful tools for creating better-looking and more easily interpretable graphics than traditional SAS/GRAPH. The SG procedures and GTL have resulted in improved graphic quality, a clear step forward from the traditional SAS/GRAPH procedures. To view the entire presentation, please use the following link:

https://docs.google.com/open?id=0B5X2DGNJDMGwYW9Na2t3V0VSTWM

Page 15: ECHOES - IASCT

Biomarkers and Targeted Therapy in Oncology Trials

- Nirupama Biswal [Novartis]

With the advent of computational biology and molecular genetics, understanding of disease mechanisms has improved over the past few years. Based on the underlying genetic pathways, various sub-classifications of any particular cancer type have become possible. Take the example of breast cancer: based on molecular status, it can be sub-classified as HER2 positive, ER/PR positive, triple negative, etc. Anticancer drugs are developed against breast cancer in general, and irrespective of molecular status, the same drug might be administered against all the sub-classes. Cancer drugs are well known for their toxicities and unwanted adverse events: a few patients reap great benefits, but a vast majority have to bear the unwanted toxicities despite not benefitting much from the therapeutic intervention.

This is where molecular sub-typing with the help of biomarkers comes into the picture. Biomarkers can be called the molecular signature of a particular disease sub-type, provided they are specific and sensitive enough. They help identify the category of patients that would benefit from a drug (prognostic and diagnostic) or those that would eventually develop resistance to it. In the case above, ER/PR or HER2 can be considered biomarkers for that particular cancer sub-type. Biomarkers also help us decide in the early phases which drug should go ahead and which should be de-prioritized. For the drugs moving ahead, they further help in deciding the drug dosage and schedule, in combination with pharmacokinetic studies.

By definition, a biomarker, or biological marker, is an indicator of a biological state. It is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. A simple example is that of temperature being a biomarker of fever.

Biomarker discovery, development, and validation is a tedious process requiring expertise in multiple interdisciplinary fields. The FDA has laid down guidelines for classifying a biomarker based on its degree of validity: exploratory, probable valid, and known valid. For an exploratory biomarker to achieve the status of a known valid biomarker, via that of a probable valid one, it needs to be measured in an analytical test system with well-established performance characteristics, and there must be widespread agreement in the medical or scientific community about the physiologic, toxicologic, pharmacologic, or clinical significance of the test results. The qualification process is intended to bridge the gap from exploratory to known valid.

Once a biomarker is known valid, it becomes knowledge-based: the underlying disease mechanism, and hence the cause-effect relationship, is already established.

The other approach is a more statistical one, where we depend on microarray analysis of gene expression profiles.

In the first approach, the biomarker can be a single protein or its corresponding DNA/RNA. With the mechanism understood and the robustness of the marker for that disease sub-type already established, the focus shifts directly to assay development and validation. Say, for example, the drug targets a particular onco-protein; this onco-protein can be assessed both at baseline and post-baseline to indicate the pharmacodynamic effects of the drug (Glivec targeting the bcr-abl protein in chronic myelogenous leukemia, CML) <Ref 8>. Since the role of bcr-abl is already known, the assay to detect the protein is developed and the threshold level of the normal protein evaluated by statistical methods. Biomarker assay development and method validation is a complex process that depends on a number of parameters, from the choice of the matrix to maintaining sample integrity to assay standardization and accuracy.

Page 16: ECHOES - IASCT

Biomarker assays are characterized in terms of their sensitivity, specificity, limit of detection, limit of quantification, and variability. An assay that is specific enough to clearly detect the molecule of interest and sensitive enough to detect values at the lower limit of quantification; in other words, a robust, reliable, and reproducible assay, fit for purpose, needs to be developed. Specificity refers to the assay's ability to clearly distinguish the analyte of interest from structurally similar substances. Selectivity measures the degree to which unrelated matrix components cause analytical interference. Precision is determined by the assay's repeatability and reproducibility, which quantitatively express the closeness of agreement among results of measurements performed under specified conditions.

For the other method, where the cause-effect relationship is unknown, large-scale screening of gene expression profiles is done to distinguish the differential expression of a set of genes between normal and diseased samples.

Gene expression profiling is one of the most significant technological advances of this decade. In a typical research study, the expression of several thousand genes is measured simultaneously on a relatively small number of subjects. The first step is to study the expression profile and compare it with that of a control, thereby screening the genes of interest. The first level of validation happens thereafter, where methods like RT-PCR are used to shortlist a smaller set of genes from the hundreds screened from the microarray. Robust statistical regression models are then used to eliminate the noise genes and pinpoint a still smaller set of genes of interest, which are again subjected to validation by rigorous statistical methods. There are three approaches to validation: independent-sample validation, split-sample validation, and internal validation. Independent-sample validation is considered the gold standard of model validation techniques; it requires an independently obtained data set that was not used in developing the signature. Independent validation studies are sometimes performed by investigators other than those who developed the signature. A close cousin of independent-sample validation is split-sample validation, in which the observations are randomly divided into two groups: one used for model development and the other for validation. While this method gives an essentially honest picture of the predictive accuracy of a gene signature, it does not capture the between-study variability that independent-sample validation can identify. For this reason, estimates obtained by split-sample methods remain slight overestimates of the true predictive performance.
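A split-sample partition like the one described can be sketched with PROC SURVEYSELECT (dataset and variable names are assumptions for illustration); OUTALL keeps every observation and flags the sampled half in the Selected variable:

proc surveyselect data=work.expression out=work.split
    samprate=0.5 outall seed=20121105;
run;

data work.train work.test;
  set work.split;
  if selected then output work.train; /* model development half */
  else output work.test;              /* validation half */
run;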

Once a gene signature for a particular subtype is obtained and validated in this way, clinical assay validation follows, finally arriving at the clinical utility of the biomarker or gene signature thus validated.

References:

1. Adv Cancer Res. 2007;96:269-98. http://download.bioon.com.cn/upload/201112/08222642_1394.pdf

2. Drug–Diagnostic Co-Development Concept Paper, 2005. FDA. http://www.fda.gov/cder/genomics/pharmacoconceptfn.pdf

3. http://dx.doi.org/10.1016/S0065-230X(06)96010-2

4. http://www.biomarkersconsortium.org

5. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2684735/

6. Jean W. Lee, Daniel Figeys, Julian Vasilescu. Advances in Cancer Research, Volume 96, 2006, Pages 269-298.

7. doi:10.1182/blood-2003-06-2071. http://bloodjournal.hematologylibrary.org/.../blood-2003-06-2071.full.pdf

Page 17: ECHOES - IASCT

Dear Readers,

The ECHOES team would like to hear from you regarding its contents and the style of presentation. Please feel free to send us your feedback in the form of short letters/notes. A few selected notes/letters will be published in the upcoming issue of ECHOES. You can also share your experiences of attending any national/international seminar or conference related to clinical research.

The letters/notes can be sent to: [email protected]; [email protected]; [email protected]

Looking forward to hearing from you.
ECHOES Team

