Nursing Research & Statistics

Build the critical thinking skills to read, interpret, and apply research evidence in clinical nursing practice.

Foundations of Nursing Research

What Is Nursing Research?

Understanding the purpose and scope of research in nursing

Nursing research is the systematic investigation of phenomena relevant to nursing practice. It generates the evidence that guides clinical decisions, improves patient outcomes, and advances the profession.

Research

Generates NEW knowledge through systematic investigation. Asks: "What is true?"

Evidence-Based Practice

APPLIES existing research + clinical expertise + patient preferences to care decisions.

Quality Improvement

IMPROVES current processes using data. Focuses on systems, not generating generalizable knowledge.

Why This Matters

On the NCLEX, you'll encounter questions asking you to differentiate research from QI from EBP. Remember: research creates knowledge, EBP applies it, QI improves local processes.

Variables, Reliability & Validity

Core concepts that underpin all research

Independent Variable (IV)

What the researcher manipulates or studies. The presumed cause. Example: a new pain medication.

Dependent Variable (DV)

The outcome measured. What changes as a result. Example: patient-reported pain score.

Confounding Variable

An uncontrolled variable that may influence the outcome, threatening validity. Example: age differences between groups.

Reliability vs Validity

Reliability = consistency (does it give the same result each time?). Validity = accuracy (does it measure what it claims to?). A bathroom scale that always reads 5 lbs too heavy is reliable but not valid. A tool must be reliable before it can be valid.

Research Terminology

Match each research term to its definition


Levels of Evidence Pyramid

From strongest to weakest: systematic reviews & meta-analyses, randomized controlled trials (RCTs), cohort studies, case-control studies, case series and case reports, and expert opinion.

Knowledge Check: Research Foundations


A nurse reads a study that tested a new wound care protocol on 200 patients. This is an example of:

Core Statistics for Nurses

Descriptive vs Inferential Statistics

Two fundamental approaches to analyzing data

Descriptive statistics tell you what you see. Inferential statistics tell you what you can conclude.

Descriptive

Mean, median, mode, range, standard deviation, frequency distributions. Summarizes the sample.

Inferential

P-values, confidence intervals, t-tests, chi-square, ANOVA, regression. Makes conclusions about the population.

Mean, Median, Mode & Variability

Understanding central tendency and spread

Mean (Average)

Sum of all values ÷ number of values. Sensitive to outliers. Best for normally distributed data.

Median (Middle Value)

The 50th percentile. Not affected by extreme values. Best for skewed data (like hospital length of stay).

Mode (Most Frequent)

The most commonly occurring value. Useful for categorical data (e.g., most common diagnosis).

Clinical Example

Hospital length-of-stay data is typically right-skewed (most patients stay a few days, but some stay weeks). The median is a better measure than the mean because it isn't inflated by a few very long stays.

Standard Deviation (SD)

SD measures how spread out data is from the mean. Small SD = data points cluster near the mean. Large SD = data is widely scattered. In a normal distribution, ~68% of values fall within ±1 SD, ~95% within ±2 SD, and ~99.7% within ±3 SD.
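The behavior of mean, median, and mode on skewed data can be checked directly with Python's standard library. A minimal sketch, using hypothetical length-of-stay values with one long-stay outlier:

```python
# Sketch: central tendency on right-skewed length-of-stay data (hypothetical values).
import statistics

los_days = [2, 2, 3, 3, 3, 4, 4, 5, 6, 28]  # one 28-day outlier

mean_los = statistics.mean(los_days)      # pulled up by the outlier -> 6.0
median_los = statistics.median(los_days)  # robust to the outlier -> 3.5
mode_los = statistics.mode(los_days)      # most frequent value -> 3
sd_los = statistics.stdev(los_days)       # sample SD, inflated by the outlier
```

A single 28-day stay drags the mean to 6.0 days even though a typical patient stays about 3, which is why the median is preferred for skewed clinical data.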

P-Values, Significance & Confidence Intervals

The most misunderstood concepts in statistics

What a P-Value Actually Means

A p-value is the probability of seeing results as extreme as (or more extreme than) what was observed, IF the null hypothesis were true. p = 0.03 means: 'If there truly were no effect, there's only a 3% chance we'd see results this extreme.' It is NOT the probability the treatment works.

Type I Error (α)

False positive: concluding an effect exists when it doesn't. Controlled by setting α (usually 0.05). "The boy who cried wolf."

Type II Error (β)

False negative: missing a real effect. Related to statistical power (1-β). Often caused by small sample sizes. "Missing the wolf."

Confidence Intervals

A 95% CI gives a range of plausible values for the true effect. If the CI for a treatment difference does NOT cross zero (for differences) or 1.0 (for ratios), the result is statistically significant. CI width reflects precision: narrow = more precise, wide = less certain.
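The definition of a p-value can be made concrete by simulation. A minimal sketch with hypothetical pain-score data: a one-sided permutation test that asks how often group labels shuffled at random (i.e., a world where the null is true) produce a difference at least as extreme as the one observed.

```python
# Sketch: what a p-value means, via a one-sided permutation test (hypothetical data).
import random

random.seed(42)
control   = [6, 7, 5, 8, 6, 7, 9, 6]   # pain scores, usual care
treatment = [4, 5, 3, 6, 4, 5, 4, 5]   # pain scores, new analgesic

observed_diff = sum(control) / len(control) - sum(treatment) / len(treatment)

# Under the null hypothesis, group labels are interchangeable.
pooled = control + treatment
extreme = 0
n_reps = 10_000
for _ in range(n_reps):
    random.shuffle(pooled)
    diff = sum(pooled[:8]) / 8 - sum(pooled[8:]) / 8
    if diff >= observed_diff:
        extreme += 1

# Proportion of "null worlds" at least as extreme as what we saw.
p_value = extreme / n_reps
```

Here almost no random relabeling reproduces the observed 2.25-point difference, so the p-value is small: if there truly were no effect, results this extreme would be rare.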

Statistics Concept Matching

Match each statistical concept to its meaning


Knowledge Check: Core Statistics


A study reports p = 0.03. This means:

Research Design & Methodology

Quantitative vs Qualitative Research

Two complementary approaches to knowing

Quantitative

Uses numbers, measurements, and statistical analysis. Tests hypotheses. Objective. Answers "how much" and "how many."

Examples: RCTs, cohort studies, surveys with Likert scales

Qualitative

Uses words, themes, and narratives. Explores experiences. Subjective. Answers "how" and "why."

Examples: interviews, focus groups, ethnography, phenomenology

When to Use Each

Quantitative: 'Does hand hygiene education reduce infection rates?' (measurable outcome). Qualitative: 'What barriers do nurses experience with hand hygiene compliance?' (exploring experiences). Mixed methods combines both.

Study Design Hierarchy

Validity in Research Design

Can we trust the results?

Internal Validity

Are the results due to the intervention and not confounders? Strengthened by: randomization, blinding, control groups, standardized protocols.

External Validity

Can results be generalized to other populations/settings? Strengthened by: diverse samples, real-world settings, multi-site studies.

Common Threats to Validity

Internal: selection bias, attrition (dropouts), maturation, history (external events), Hawthorne effect. External: narrow inclusion criteria, single-site study, volunteer bias, artificial lab setting.

Knowledge Check: Research Design


Which study design provides the STRONGEST evidence for causation?

Evidence-Based Practice

The Three Pillars of EBP

Integrating evidence, expertise, and patient values

Best Research Evidence

Current, valid, clinically relevant studies. Higher levels of evidence preferred.

Clinical Expertise

The clinician's accumulated knowledge, skills, and judgment from education and experience.

Patient Preferences

Individual patient values, concerns, cultural beliefs, and expectations about care.

Clinical Scenario

Research shows compression stockings reduce DVT risk (evidence). You know this patient is post-op day 1 (expertise). But the patient refuses to wear them due to discomfort (preferences). EBP means integrating all three, perhaps exploring alternatives like SCDs or patient education.

Clinical vs Statistical Significance

Not every significant p-value matters at the bedside

The Difference

Statistical significance means the result is unlikely due to chance (p < 0.05). Clinical significance means the result is large enough to matter in practice. A drug that lowers HbA1c by 0.1% may be statistically significant with a huge sample but clinically meaningless. You would not change treatment for 0.1%.

Absolute vs Relative Risk Reduction

Always ask for the absolute numbers. "50% risk reduction" sounds impressive, but if the baseline risk was 2% and dropped to 1%, the absolute reduction is only 1%. NNT = 100 (treat 100 patients for 1 to benefit).

Knowledge Check: Evidence-Based Practice


Evidence-based practice integrates three components. Which is NOT one of them?

Critical Appraisal of Research

How to Read a Research Paper

A systematic approach for busy nurses

Follow this sequence when evaluating any research article

Steps for Appraising a Study

  1. Identify the research question and the study design used to answer it.
  2. Assess the sample: size, how participants were selected, and representativeness.
  3. Examine the methods: randomization, blinding, control groups, measurement tools.
  4. Interpret the results: effect size, p-values, confidence intervals.
  5. Judge applicability: does the study population resemble your patients?

Recognizing Misleading Statistics

Common tricks and pitfalls in data presentation

Truncated Y-Axis

Starting the Y-axis at a value other than 0 makes small differences look dramatic. Always check axis scales.

Cherry-Picking Data

Reporting only favorable outcomes or subgroups. Look for pre-registered study protocols and intention-to-treat analysis.

Relative Risk Without Context

"Doubles your risk!" sounds alarming, but if baseline risk is 1 in a million, doubled is still 2 in a million. Always ask for absolute numbers.

Confusing Correlation with Causation

Ice cream sales and drowning rates both rise in summer, not because ice cream causes drowning, but because of the shared confounder (warm weather).

Knowledge Check: Critical Appraisal


When critically appraising a study, the FIRST question to ask is:

Applied Clinical Interpretation

Sensitivity, Specificity & Predictive Values

Understanding diagnostic test performance

Sensitivity (True Positive Rate)

How well does the test detect disease? High sensitivity → few false negatives → good for RULING OUT (SNout).

Specificity (True Negative Rate)

How well does the test exclude non-disease? High specificity → few false positives → good for RULING IN (SPin).

SNout & SPin

SNout: Sensitivity rules OUT. If a highly sensitive test is negative, you can rule OUT the disease. SPin: Specificity rules IN. If a highly specific test is positive, you can rule IN the disease. This is a high-yield NCLEX concept.

Positive Predictive Value (PPV)

If the test is positive, what's the probability the patient truly has the disease? Depends on disease prevalence.

Negative Predictive Value (NPV)

If the test is negative, what's the probability the patient truly doesn't have the disease? Also depends on prevalence.

NNT, Risk Communication & Clinical Decision-Making

Translating numbers into patient care

Number Needed to Treat (NNT)

NNT = 1 ÷ Absolute Risk Reduction. It tells you how many patients must receive the treatment for ONE additional patient to benefit. NNT of 10 is excellent (treat 10, 1 benefits). NNT of 500 means marginal benefit. Always compare NNT to NNH (Number Needed to Harm).
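The ARR/RRR/NNT arithmetic is worth working through once. A minimal sketch with hypothetical trial event rates:

```python
# Sketch: ARR, RRR, and NNT from event rates (hypothetical trial numbers).
import math

control_risk = 0.10    # 10% of untreated patients have the event
treatment_risk = 0.06  # 6% of treated patients do

arr = control_risk - treatment_risk  # absolute risk reduction: 0.04 (4 per 100)
rrr = arr / control_risk             # relative risk reduction: 0.40 ("40% reduction")
nnt = math.ceil(1 / arr)             # treat 25 patients for 1 to benefit
```

The same trial can be advertised as a "40% risk reduction" or as "4 fewer events per 100 patients treated"; the NNT of 25 translates both into a bedside-usable number.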

Risk Communication Tips

Use absolute numbers: "2 out of 100 patients experience this side effect" is clearer than "2% risk." Use natural frequencies. Avoid framing effects (presenting the same data positively or negatively to influence perception).

Avoiding Misinterpretation

Always ask: What's the baseline risk? What's the absolute difference? Is the confidence interval narrow? Is this population similar to my patient? Does clinical significance match statistical significance?

Applied Interpretation Matching

Match each clinical research concept to its application


Knowledge Check: Clinical Interpretation


A diagnostic test has 95% sensitivity and 60% specificity. This means:

Sampling, Ethics, Visualization & Synthesis

Sampling Methods

How participants are selected shapes the entire study

The way a researcher selects participants from a population determines whether findings can be generalized. The goal is a sample that accurately represents the target population.

Simple Random Sampling

Every member of the population has an equal chance of being selected (like drawing names from a hat). Minimizes selection bias and supports statistical generalization. Gold standard but often impractical in clinical research.

Stratified Sampling

The population is divided into subgroups (strata) based on a key characteristic (e.g., age, sex, diagnosis), then random samples are drawn from each stratum. Ensures proportional representation of important subgroups.

Convenience Sampling

Participants are selected based on easy availability (e.g., patients in your unit today). Most common in nursing research but highest risk of sampling bias. Results may not generalize beyond the immediate group.

Purposive (Purposeful) Sampling

Participants are deliberately chosen because they have specific characteristics or experiences relevant to the study. Common in qualitative research. Example: selecting only nurses who have experienced moral distress.
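The difference between simple random and stratified sampling is easy to see in code. A minimal sketch with a hypothetical 100-patient roster (20% ICU, 80% Med-Surg), using only the standard library:

```python
# Sketch: simple random vs stratified sampling (hypothetical patient roster).
import random

random.seed(7)
patients = [{"id": i, "unit": "ICU" if i < 20 else "MedSurg"} for i in range(100)]

# Simple random sampling: every patient has an equal chance of selection,
# but a small sample may over- or under-represent the ICU by chance.
srs = random.sample(patients, k=10)

# Stratified sampling: draw from each unit in proportion to its size,
# guaranteeing 2 ICU and 8 Med-Surg patients in a sample of 10.
icu = [p for p in patients if p["unit"] == "ICU"]
medsurg = [p for p in patients if p["unit"] == "MedSurg"]
stratified = random.sample(icu, k=2) + random.sample(medsurg, k=8)
```

Stratification trades a little complexity for a guarantee: the key subgroup proportions in the sample match the population exactly rather than on average.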

Why Sample Size Matters

Larger samples increase statistical power (the ability to detect a real effect) and produce narrower confidence intervals (more precise estimates). Small samples risk Type II errors (missing real effects) and may not capture population variability. Researchers use power analysis before a study to calculate the minimum sample needed.

Sampling Bias

Sampling bias occurs when certain members of the population are systematically more or less likely to be selected. This threatens external validity: your results may not apply to the broader population. Examples: volunteer bias (only motivated people enroll), non-response bias (those who don't respond differ from those who do), and selection bias from convenience sampling.

Ethical Considerations in Research

Protecting participants is a non-negotiable foundation

Research ethics exist because of historical abuses such as the Nazi medical experiments and the Tuskegee Syphilis Study. Modern ethical frameworks ensure research never exploits participants.

Informed Consent in Research

Research informed consent requires: (1) disclosure of purpose, procedures, risks, benefits, and alternatives; (2) participant comprehension; (3) voluntary agreement without coercion. Participants must know they can withdraw at any time without penalty to their care.

IRB / REB Review

An Institutional Review Board (IRB) in the U.S. or Research Ethics Board (REB) in Canada must review and approve all human subjects research BEFORE data collection begins. They evaluate risk-benefit ratios, consent processes, confidentiality protections, and safeguards for vulnerable populations.

The Belmont Report: Three Core Principles

Respect for persons (honor autonomy and informed consent; protect those with diminished autonomy), beneficence (maximize benefits and minimize harm), and justice (select participants fairly and distribute the burdens and benefits of research equitably).

Vulnerable Populations

Vulnerable populations require additional ethical protections because they have diminished capacity to give truly voluntary consent. This includes: children (require parental consent plus child assent), pregnant women, prisoners, cognitively impaired individuals, economically disadvantaged persons, and those in dependent relationships (e.g., students, employees). IRBs/REBs apply heightened scrutiny to studies involving these groups.

Data Visualization & Interpretation

Reading graphs correctly is a critical research literacy skill

Bar Charts

Display categorical data using rectangular bars. Bar height (or length) represents frequency or value. Bars are separated by gaps. Best for comparing discrete groups (e.g., infection rates by unit, diagnoses by type). Always check: Does the Y-axis start at 0?

Histograms

Display the distribution of continuous numerical data. Bars are adjacent (no gaps) because the X-axis represents a continuous scale divided into intervals (bins). Reveals shape of distribution: normal, skewed left, skewed right, bimodal. Example: distribution of patient ages in a study.

Scatter Plots

Show the relationship between two continuous variables using individual data points plotted on X-Y axes. Reveal correlations (positive, negative, none), outliers, and the strength of relationships. A trend line may be added. Example: plotting hours studied vs exam scores.

Interpreting Graphs: Key Questions

When reading any graph, systematically ask: (1) What variables are on each axis? (2) What are the units? (3) Does the Y-axis start at 0, or is it truncated? (4) Are the intervals equal? (5) Is the scale linear or logarithmic? (6) What is the sample size? (7) Are error bars or confidence intervals shown? Missing any of these can lead to misinterpretation.

Misleading Visual: Truncated Axes

A Y-axis starting at 98 instead of 0 can make a temperature change from 98.6°F to 99.2°F appear enormous. Always look at the actual numerical difference, not just the visual size of bars or lines.

Misleading Visual: Unequal Intervals

If X-axis intervals are 1, 2, 5, 10, 50, a linear-looking trend may actually represent exponential growth. Verify that axis intervals are consistent before drawing conclusions about rates of change.

Misleading Visual: 3D Charts & Pictographs

Three-dimensional bar charts distort visual perception: rear bars appear smaller. Pictographs that scale both width and height make differences appear squared. Stick to simple 2D charts for accurate comparison.

Systematic Reviews & Meta-Analysis

The pinnacle of the evidence hierarchy

A systematic review synthesizes all available evidence on a question. When it includes statistical pooling of results, it becomes a meta-analysis.

How a Systematic Review Is Conducted

  1. Define a focused, answerable question (often using PICO).
  2. Search multiple databases exhaustively, including unpublished studies when possible.
  3. Screen studies against predefined inclusion/exclusion criteria.
  4. Appraise the quality and risk of bias of each included study.
  5. Extract data and synthesize the findings (statistically pooled, if a meta-analysis).

Forest Plots

The primary visual output of a meta-analysis. Each horizontal line represents one study: the square is the point estimate (effect size), the line is the confidence interval, and the square's size reflects the study's weight. The diamond at the bottom represents the pooled (combined) effect. A vertical line of no effect (0 for mean differences, 1.0 for ratios) helps determine significance.

Heterogeneity (I²)

Measures how much variation across studies is due to real differences rather than chance. I² = 0% means no heterogeneity (studies agree). I² > 50% suggests substantial heterogeneity; the pooled result should be interpreted cautiously. Sources include differences in populations, interventions, or outcome measures.

Pooled Effect Size

The combined estimate from all studies, weighted by study size and precision. Represented by the diamond in a forest plot. A narrow diamond indicates a precise pooled estimate. If the diamond does not cross the line of no effect, the overall result is statistically significant.
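The pooling arithmetic behind a forest plot can be sketched in a few lines. A minimal fixed-effect (inverse-variance) example with hypothetical study results, computing the pooled estimate, its 95% CI, and I² from Cochran's Q:

```python
# Sketch: fixed-effect inverse-variance pooling and I-squared (hypothetical studies).
import math

# (effect estimate, standard error) for each study, e.g. mean differences
studies = [(0.50, 0.20), (0.30, 0.15), (0.45, 0.25), (0.35, 0.10)]

# Precise studies (small SE) receive more weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Cochran's Q: does disagreement between studies exceed chance?
q = sum(w * (e - pooled) ** 2 for (e, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
```

For these numbers the pooled diamond sits near 0.37 with a CI that does not cross zero (statistically significant), and I² is 0% because the four estimates agree more closely than chance alone would require.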

Why Systematic Reviews Are Powerful

By combining data from multiple studies, systematic reviews and meta-analyses increase statistical power, improve precision of effect estimates, resolve conflicting results from individual studies, and reduce the impact of bias from any single study. They sit at the top of the evidence pyramid and are the foundation of clinical practice guidelines.

Knowledge Check: Advanced Research Topics


A researcher recruits participants from a single hospital's waiting room during morning hours only. This is an example of:

Advanced Research Concepts

Match each research concept to its correct definition



© 2026 NurseNest. All rights reserved.
NurseNest provides educational content for exam preparation and is not affiliated with NCLEX, regulatory colleges, or licensing bodies.