Foundations of Clinical Research: Applications to Practice. Leslie G. Portney, Mary P. Watkins.
Second, the MNA does not require any laboratory data. Because no blood testing is needed to evaluate nutritional status, it is a cost-effective and noninvasive tool.
Third, the MNA is quick to administer. It can be completed in less than 15 minutes, which improves administrative efficiency and reduces the assessment burden on patients and clinicians [ 5 ]. Most importantly, the MNA can identify people at risk of malnutrition before weight loss occurs; the MNA score for these individuals falls within the at-risk range. These advantages make the MNA worthy of application in both clinical and research settings.
The MNA has been validated in different populations. The MNA has been validated to be a useful tool in the identification of elderly home-care patients at risk of malnutrition [ 7 ]. It is considered a gold standard for free-living elderly [ 8 ] and for those living in long-term care facilities [ 9 , 10 ].
In addition, the MNA has been demonstrated to be feasible and reliable for older people with intellectual disabilities [ 9 ]. However, measurement properties tend to be sample-dependent, and thus the evidence established with frail and elderly people cannot be applied to patients with stroke [ 11 ]. In addition, to be clinically useful, a measurement tool must be scientifically sound in terms of psychometric properties.
Regarding the psychometric properties of the MNA, the correlation between the MNA and the Patient-Generated Subjective Global Assessment was high in screening malnourished elderly patients with stroke [ 12 ]. Thus, the MNA appears promising, but its other measurement properties, particularly test-retest reliability, remain unknown for patients with stroke. Test-retest reliability reflects the extent of agreement between repeated measures under similar assessment conditions [ 13 ].
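Test-retest agreement can be sketched numerically. For categorical screening classifications, percent agreement and Cohen's kappa are common summaries (for continuous MNA total scores, an intraclass correlation coefficient would be the usual statistic); the category labels and ratings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Chance-corrected agreement between two repeated categorical ratings."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # Agreement expected by chance from each rating's category frequencies
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical screening categories assigned on two occasions
t1 = ["at_risk", "normal", "at_risk", "malnourished", "normal", "at_risk"]
t2 = ["at_risk", "normal", "normal", "malnourished", "normal", "at_risk"]
print(round(cohens_kappa(t1, t2), 2))  # → 0.74
```

A kappa near 1 indicates that the tool classifies people consistently across repeated administrations; a kappa near 0 indicates agreement no better than chance.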
A screening tool with acceptable test-retest reliability allows users to consistently identify those at risk for malnutrition [ 14 ]. It has been reported that nutritional status was poor in acute stroke patients regardless of stroke severity [ 15 ], and some studies have reported a relationship between lower nutritional status and lower QOL in patients with cancer and in elderly people [ 16 - 19 ].
However, studies examining the relationship between nutritional status and QOL in patients with stroke remain few.
Knowledge of the relationship between nutritional status and QOL could guide clinicians to improve QOL in patients with stroke more effectively.

Content validity deals with the scope of the evaluation method and with how well the information gathered fully reflects the variable under study.

Criterion-Related Validity. This type of validity can be determined by comparing a measurement with a particular factor or criterion, and it can be divided into predictive, concurrent, and prescriptive validity.

Predictive validity can be assessed by determining whether the predictions originating from the measurements come true.
In this case, the outcomes act as the criterion2. The predictive value of a positive test is the number of true positives divided by the total number of positive responses (true and false positives). The predictive value of a negative test is the number of true negatives divided by the total number of negative responses (true and false negatives).
For example, the predictive validity of the Lachman's test was evaluated by Cooperman et al. In this study, the predictive value of a positive Lachman's test was defined as the likelihood that a subject with a positive test had an injured anterior cruciate ligament.
The predictive value of a negative test was defined as the likelihood that a subject with a negative Lachman's test did not have an injured anterior cruciate ligament.
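The predictive-value formulas can be sketched in code (the counts below are hypothetical, chosen only to produce a positive predictive value below one half and a negative predictive value of 7 in 10, echoing the example that follows):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from diagnostic-test counts."""
    ppv = tp / (tp + fp)  # true positives / all positive responses
    npv = tn / (tn + fn)  # true negatives / all negative responses
    return ppv, npv

# Hypothetical counts for a clinical test against a reference diagnosis
ppv, npv = predictive_values(tp=45, fp=55, tn=70, fn=30)
print(ppv, npv)  # → 0.45 0.7
```

With these counts, fewer than half of the positive tests correspond to an actual injury, while 7 of 10 negative tests correspond to an intact ligament.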
This means that in less than half of the cases in which the Lachman's test was positive the subjects had an injured anterior cruciate ligament, while in 7 out of 10 cases with a negative Lachman's test the ligament was not injured.

Concurrent validity evaluates whether measurements taken using different instruments agree with each other. It is used to test whether a new instrument is interchangeable with an established "gold standard". Concurrent validity and parallel reliability are similar; the difference is that concurrent validity requires the reference measurements to come from a valid instrument, while parallel reliability only requires reliable measurements from the criterion instrument8.
Thus, reliability is a pre-requisite for validity. For example, the concurrent validity of the Lachman's test can be tested using the arthroscopy examination as a reference to evaluate anterior cruciate ligament injury, because this examination is considered the valid gold standard, while the parallel reliability of the Lachman's test can be tested using the Rolimeter, because this instrument was shown to have high parallel reliability.

Prescriptive validity is related to how appropriate it is to use a measurement to recommend a treatment. It is determined by the positive or negative outcome of a prescribed treatment2,3.

For example, the Lachman's test was performed immediately before and 24 months after anterior cruciate ligament reconstruction in subjects with acute or chronic injury to the anterior cruciate ligament confirmed by magnetic resonance imaging and arthroscopy examination. Two years after the reconstruction, no patient had a moderate or severe Lachman's test. Thus, a moderate or severe Lachman's test seems to have high prescriptive validity for success of reconstruction.

Study validity refers to the internal and external validity of the study.
The external validity of a study is the extent to which its results can be generalized outside the experimental situation: in other words, how applicable the findings and conclusions drawn from a given sample are to a larger population. Internal and external validity are related to the study itself and not specifically to the validity of the measurements taken.
The measurements of an instrument used for clinical evaluation need to identify clinically significant differences between and within patients over time. For example, the Lachman's test must detect meaningful changes in knee stability before and after treatment. The measurements should show changes in the variable being assessed, but they should not be influenced by changes in other variables if the one under study remains stable (does not change). Instrument responsiveness includes the concepts of sensitivity and specificity.
Sensitivity. The ability of an instrument to detect changes in the variable under study, when they occur, is defined as the instrument's sensitivity to change. For example, the ability of the Lachman's test to detect improvement in knee instability after anterior cruciate ligament reconstruction in subjects with acute or chronic injury is a measure of its sensitivity. Therefore, the measurement using the Lachman's test is considered sensitive because it was able to detect meaningful changes after treatment.
Specificity. The stability of an instrument's measure, that is, its ability not to change when no changes in the variable under study occur, is defined as the instrument's specificity. Note that changes in other variables may happen, but if the variable of interest remains stable, the measurement taken should not change.
Thus, specificity is related to the actual testing of the discriminant validity of a measure (see construct validity). In other words, specificity evaluates whether the variable under study is uninfluenced by changes in other variables. For example, the Lachman's test specificity can be analyzed by performing the test in subjects with a non-injured anterior cruciate ligament but an injured posterior cruciate ligament.
The test's results would have to be negative, because there would be no anterior cruciate ligament injury, and the test is supposed to detect injuries to this ligament and not injuries to the posterior cruciate ligament.

Sensitivity and specificity of diagnostic tests. In studies involving a diagnostic test, such as the Lachman's test, the assessment of the instrument's sensitivity corresponds to the probability that the measures will detect a positive test among patients with disease or injury (true positive test).
When a test incorrectly identifies a problem as positive or negative, this is referred to as a false positive or false negative, respectively. The sensitivity of a diagnostic test is calculated as the number of true positive tests divided by the number of true positives plus false negatives3.
If the test yields positive results in subjects without the dysfunction, the false positives increase; consequently, the credibility of the test decreases, and it may not be useful for what it is trying to measure.
For example, when testing for knee instability using the Lachman's test, the sensitivity is represented by the ability of the test to detect positive results (abnormal laxity in the anterior cruciate ligament) in subjects who have the abnormality.
Kim and Kim5 evaluated the sensitivity of diagnostic tests for injuries of anterior cruciate ligament. The tests evaluated were the anterior drawer test, the Lachman's test, and the pivot shift test. One-hundred forty-seven patients with an injured anterior cruciate ligament confirmed by arthroscopy were tested under anesthesia before surgery.
However, in the Donaldson et al. study the Lachman's test was robust to possible confounding by muscle contraction, while the pivot shift test was not.
Instrument specificity is evaluated by the probability of a negative test among patients without disease or injury (true negative test).
The specificity of a diagnostic test can be calculated as the number of true negative tests divided by the number of true negatives plus false positives3. For example, when testing for knee instability using the Lachman's test, the specificity is represented by the ability of the test to detect negative results (normal laxity) in subjects who are normal. When a study evaluates instrument responsiveness in a sample of subjects with a disease or dysfunction, only sensitivity can be tested, because only false negatives can be found.
Without including normal subjects (without the disease or dysfunction), it is impossible to analyze whether the test has false positives.
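The two formulas can be sketched together (the counts and the arthroscopy reference are hypothetical illustrations, not data from the studies cited):

```python
def sensitivity(tp, fn):
    """True positive rate: injuries detected among subjects who are injured."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: negatives among subjects who are not injured."""
    return tn / (tn + fp)

# Hypothetical Lachman's test counts against an arthroscopy reference
tp, fn = 88, 12   # injured ligaments: detected vs. missed (false negatives)
tn, fp = 90, 10   # intact ligaments: correctly negative vs. false alarms
print(sensitivity(tp, fn), specificity(tn, fp))  # → 0.88 0.9
```

Note that computing specificity requires the second row of counts, from non-injured subjects, which is exactly why a sample containing only injured subjects cannot reveal false positives.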
Both the sensitivity and the specificity of a test need to be known, because a test should have a low frequency of both false positives and false negatives in order to be useful in the decision-making process. Table 1 presents the different measurement properties and examples for the Lachman's test. Selected interactions between measurement properties are presented in the following sections.

Reliability and Validity. Reliability is a pre-requisite for validity2.
A measure can be reliable but not valid10,15; however, a measurement cannot be valid if it is not reliable. A measurement can only be considered valid when it has no systematic error or random error (reliability). Consistent and reproducible measurements do not by themselves indicate that the variable of interest is in fact being measured.
For example, a questionnaire can have internal consistency (homogeneity among items) but not content validity (the items do not reflect the variable under study).
High reliability (low random error) along with low systematic error results in validity, assuming that the measurement in fact reflects what it is supposed to measure8. It is important to note that the measurement properties can be tested separately, but they depend on each other to provide useful measures.
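The reliable-but-not-valid case can be sketched with simulated data: small random error makes repeated measures consistent, while a constant systematic error keeps them far from the true value (all values below are invented for illustration):

```python
import random

def simulate_measures(true_value, bias, noise_sd, n=1000, seed=1):
    """Repeated measurements = true value + systematic bias + random error."""
    rng = random.Random(seed)
    return [true_value + bias + rng.gauss(0.0, noise_sd) for _ in range(n)]

# Consistent instrument (tiny random error) with a constant 3-unit bias
scores = simulate_measures(true_value=10.0, bias=3.0, noise_sd=0.1)
mean = sum(scores) / len(scores)
spread = (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5
# Low spread -> reproducible (reliable); mean far from 10 -> not valid
print(round(mean, 1), round(spread, 2))
```

The repeated measures cluster tightly (high reliability), yet every one of them misses the true value of 10, which is exactly the reliable-but-invalid pattern described above.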
Validity and Responsiveness. A measure that is valid at one point in time should also be valid at a different point in time. Consequently, in order to provide valid measurements, an instrument should be responsive to changes over time. Thus, a valid measure needs to be responsive and have high reliability, as discussed before. However, some measures can be valid only at one point in time. For example, the concurrent validity of the Lachman's test can be tested in normal subjects using arthroscopy.
The measures may be responsive and valid, but they may differ if a second Lachman's test were performed. According to Portney and Watkins3 and Hays and Hadorn19, responsiveness is one aspect of validity rather than a separate characteristic. Hays and Hadorn19 stated that the fact that an instrument's measurements are responsive to a clinical intervention supports the hypothesis that the instrument's measurements are valid. Instrument responsiveness adds longitudinal information (i.e., information about change over time) to the assessment of validity.
When selecting an instrument for evaluation, one should extend his or her concern for measurement validity beyond face and construct validity to the responsiveness of the instrument3. On the other hand, Guyatt et al. stated that an instrument's measurements can have reliability but no responsiveness; responsiveness but no validity; and responsiveness but no reliability.
Despite this disagreement, all measurement characteristics that may affect the findings should be addressed in a study's discussion section in order to provide a full understanding of the strengths and weaknesses of the results and conclusions. The measurement properties may need to be evaluated separately depending on the objectives of the study.
Reliability and Responsiveness. A measurement can have high reliability (consistency) even when the instrument is unresponsive (i.e., it does not detect change). Conversely, a measurement can have low reliability, yet the instrument may be responsive. Both reliability and responsiveness are pre-requisites of validity, but responsiveness is not a pre-requisite for reliability.
Some instruments are often used but their measurement characteristics are frequently overlooked and assumed to be adequate. However, this may not be true, and the measurements may have low quality resulting in insufficient statistical power. The power of a study is the extent to which an investigator can detect a difference when a true difference exists.
It may also be stated as the probability of correctly rejecting the null hypothesis that "there are no differences" when the null hypothesis is false. Thus, the power of a study relates to how capable the measurement is of detecting differences between or within groups.
Every study should have sufficient ability (power) to detect the effect caused by the application of an independent variable (e.g., a treatment). It should also have a low probability of a type I (alpha) error, which corresponds to rejecting the null hypothesis when it should not be rejected (for example, finding differences before and after a treatment when, in reality, there were no changes).
The determinants of the power of a study include the following.

Sample size. The larger the sample, the greater the power, because with a large sample the general population is more likely to be represented. This means the study has greater external validity. Therefore, the results can be generalized to a larger population, and true differences between groups are more likely to be recognized.
Effect size. The larger the effect size, the greater the power.
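One common standardized effect size is Cohen's d, shown here as a hedged sketch: the difference between group means divided by a pooled standard deviation (the scores are invented and are treated as independent samples for simplicity):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical knee-laxity scores (mm) before vs. after treatment
before = [8.0, 9.5, 7.5, 10.0, 9.0]
after = [5.0, 6.5, 5.5, 7.0, 6.0]
print(round(cohens_d(before, after), 2))
```

A larger d means the group difference is big relative to the within-group variability, which is precisely the condition under which power is high.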
Effect size is the magnitude of the difference before and after treatment, or between groups. Thus, the power is influenced by the size of the effect of the experimental variable (e.g., the treatment). The larger the effect produced by a treatment on a given dependent variable, the greater the power.

Statistical significance level (a-level). The higher the a-level (i.e., the more lenient the significance criterion), the greater the power.
The a-level is the probability that the researcher is willing to accept that he or she might be wrong in rejecting the null hypothesis, or the extent to which the researcher could be wrong in saying that there are differences. The most common a-level used is 0.
Lowering the a-level reduces the chances of a type I error by requiring stronger evidence to demonstrate significant differences. Increasing the a-level makes it easier to find differences (it may increase the power), but the probability of finding a difference that actually does not exist (type I error) also increases.
Variability of the data. The lower the variance, the greater the power. Variance is a measure of the variability within a group (e.g., the spread of individual scores around the group mean).
The ability to detect a difference between groups is enhanced when the groups are distinctly different. When the variability within groups is large, the difference between groups will be less evident because the measurements may overlap.
This, in turn, will lower the power, or the ability to detect a difference. Most of the determinants of power described above depend on the quality of the measurements. If the measurements' reliability, validity, and responsiveness are not adequate, the study's ability to detect the effect of an independent variable (e.g., a treatment) will be compromised.
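The joint influence of sample size, effect size, variance, and the a-level on power can be illustrated with a small Monte Carlo sketch (a two-sample z-test with known variance; all parameter values are invented for illustration):

```python
import random

def estimated_power(n, effect, sd=1.0, alpha_z=1.96, sims=2000, seed=0):
    """Fraction of simulated studies whose two-sample z statistic exceeds
    the critical value, i.e., a Monte Carlo estimate of power."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, sd) for _ in range(n)]      # control group
        b = [rng.gauss(effect, sd) for _ in range(n)]   # treated group
        diff = sum(b) / n - sum(a) / n
        se = (2 * sd**2 / n) ** 0.5  # standard error of the difference
        if abs(diff / se) > alpha_z:
            rejections += 1
    return rejections / sims

small = estimated_power(n=20, effect=0.2)   # small effect -> low power
large = estimated_power(n=20, effect=0.8)   # larger effect -> higher power
more_n = estimated_power(n=80, effect=0.2)  # larger sample -> higher power
print(small, large, more_n)
```

Here `alpha_z=1.96` corresponds to a two-sided a-level of 0.05; raising the a-level (lowering the critical z) or reducing `sd` would likewise raise the estimated power, mirroring the determinants listed above.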
For example, the amount of variation in the scores obtained for a particular sample or group (i.e., the variance) partly reflects measurement error. When the measurement error is big (e.g., when reliability is low), true differences between or within groups are obscured. Thus, the effect size will be small, decreasing the power of the study.
Ideally, any measurement should be reliable, valid, responsive, practical, and easy to obtain.