CFA Level 1 Quantitative Methods: Complete Study Guide (2026)
All 12 Learning Modules at a Glance
| Module | Title | Exam Priority |
|---|---|---|
| LM 1 | Rates and Returns | High |
| LM 2 | Time Value of Money in Finance | High |
| LM 3 | Statistical Measures of Asset Returns | High |
| LM 4 | Probability Trees and Conditional Expectations | Medium |
| LM 5 | Portfolio Mathematics | High |
| LM 6 | Simulation Methods | Medium |
| LM 7 | Estimation and Inference | High |
| LM 8 | Hypothesis Testing | High |
| LM 9 | Parametric and Non-Parametric Tests of Independence | Medium |
| LM 10 | Simple Linear Regression | Medium |
| LM 11 | Introduction to Big Data Techniques | Low |
| LM 12 | Appendices (Statistical Tables) | Reference only |
LM 1: Rates and Returns
This is where everything begins. You need to understand not just what an interest rate is, but how to decompose it into its building blocks: real risk-free rate, inflation premium, default risk premium, liquidity premium, and maturity premium. The curriculum then walks through every major return measure you’ll use throughout the CFA program.
Return Measures You Must Know
Holding period return (HPR) is the most basic: total gain or loss over a single period, expressed as a percentage of the beginning value. Simple, but it’s the building block for everything else.
Arithmetic mean return is the simple average of periodic returns. It’s easy to compute but overstates the true growth rate of an investment over time because it ignores compounding. Geometric mean return accounts for compounding and gives you the true annualized growth rate — it’s always less than or equal to the arithmetic mean (they’re equal only when every period’s return is identical).
The harmonic mean is less intuitive but shows up in specific contexts like dollar-cost averaging. For positive values it is the lowest of the three averages (harmonic ≤ geometric ≤ arithmetic) and is most appropriate when averaging rates or ratios.
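A quick way to internalize the ordering of the three means is to compute all three on the same return series. A minimal sketch using only the standard library (the sample returns are made up for illustration):

```python
import math
from statistics import harmonic_mean

returns = [0.10, 0.05, -0.02, 0.08]  # hypothetical periodic returns

# Arithmetic mean return: simple average of the periodic returns.
arithmetic = sum(returns) / len(returns)

# Geometric mean return: compound the growth factors, then de-annualize.
geometric = math.prod(1 + r for r in returns) ** (1 / len(returns)) - 1

# Harmonic mean of the growth factors (the form used when averaging
# ratios, e.g. average purchase price under dollar-cost averaging).
harmonic = harmonic_mean([1 + r for r in returns]) - 1

# For positive growth factors: harmonic <= geometric <= arithmetic.
assert harmonic <= geometric <= arithmetic
```

Running this on any return series with some variability reproduces the ordering from the text, with equality only when every period's return is identical.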
Money-weighted vs. time-weighted return — this distinction is heavily tested. Money-weighted return is essentially the internal rate of return (IRR) of all cash flows into and out of the portfolio. It reflects the actual investor experience, but it’s heavily influenced by the timing and size of cash flows. Time-weighted return eliminates the impact of external cash flows and measures the manager’s investment skill. If you’re evaluating a portfolio manager, use time-weighted; if you’re evaluating a specific investor’s experience, use money-weighted.
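The distinction is easiest to see numerically. Below is a sketch with a hypothetical two-period account: invest $100, the portfolio grows to $110, the investor adds $50, and the portfolio ends at $170. The money-weighted return is the IRR of the investor's cash flows (solved here by simple bisection); the time-weighted return chains the sub-period returns and ignores the size of the flows:

```python
def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-9):
    """Internal rate of return by bisection; cash_flows[t] occurs at time t."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # NPV decreases in r for this sign pattern (outflows first, inflow last)
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Money-weighted return = IRR of the investor's cash flows.
mwr = irr([-100, -50, 170])

# Time-weighted return: chain sub-period returns, then take the geometric mean.
r1 = 110 / 100 - 1            # period 1: 100 -> 110
r2 = 170 / (110 + 50) - 1     # period 2: starts at 160 after the $50 deposit
twr = ((1 + r1) * (1 + r2)) ** 0.5 - 1
```

Here the money-weighted return (about 7.8%) is below the time-weighted return (about 8.1%) because the investor added money just before the weaker second period — exactly the timing effect the text describes.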
The module also covers annualizing returns across different compounding frequencies, continuously compounded returns (using the natural log), and the distinctions between gross/net, pre-tax/after-tax, real/nominal, and leveraged returns. For the formula sheet, make sure you can convert between any compounding frequency.
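The frequency conversions can be drilled with a few lines of code. A sketch, assuming a hypothetical 8% quoted annual rate:

```python
import math

quoted = 0.08  # hypothetical 8% stated annual rate

# Effective annual rate for m compounding periods per year: (1 + r/m)^m - 1.
ear = {m: (1 + quoted / m) ** m - 1 for m in (1, 2, 4, 12, 365)}

# Continuous compounding is the limiting case: e^r - 1.
ear_continuous = math.exp(quoted) - 1

# Continuously compounded (log) return from a holding period return.
hpr = 0.10
cc_return = math.log(1 + hpr)  # ln(1 + HPR)
```

The effective annual rate rises with compounding frequency, with the continuous case as the upper bound — a useful sanity check on any conversion answer.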
LM 2: Time Value of Money in Finance
TVM is the single most important concept in finance — and it gets its own dedicated deep-dive page on this site. This module applies TVM specifically to fixed-income and equity instruments: pricing bonds from coupon cash flows, valuing stocks using the dividend discount model, and calculating implied returns and growth rates.
The key framework is cash flow additivity: the value of any financial instrument equals the present value of its expected future cash flows. The curriculum extends this to three powerful applications: implied forward rates (critical for Fixed Income), forward exchange rates using no-arbitrage conditions (connects to Economics), and option pricing using cash flow additivity (sets up Derivatives).
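Cash flow additivity reduces every pricing problem to the same discounting loop. A minimal sketch, using a hypothetical 3-year, 5% annual-coupon, $100-par bond priced at a 6% yield:

```python
def present_value(cash_flows, r):
    """PV of cash flows, where cash_flows[t-1] arrives at the end of year t."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# Coupon of 5 in years 1-2, coupon plus par (105) in year 3, discounted at 6%.
price = present_value([5, 5, 105], 0.06)   # roughly 97.33 — below par, as
                                           # expected when yield > coupon
```

The same function prices a dividend stream under a dividend discount model — only the cash flow vector changes, which is the point of the additivity framework.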
LM 3: Statistical Measures of Asset Returns
This module covers central tendency, dispersion, and distribution shape — the three pillars of descriptive statistics as applied to financial data.
Central tendency: Mean, median, and mode. The curriculum also covers dealing with outliers (when to use trimmed or winsorized means) and measures of location like percentiles and quartiles. The interquartile range (IQR) is particularly useful for identifying outliers in asset return distributions.
Dispersion: Range, mean absolute deviation, variance, and standard deviation. The critical distinction for the exam: sample variance divides by (n − 1), population variance divides by N. Standard deviation is the square root of variance and has the same units as the data itself — making it more interpretable than variance.
The coefficient of variation (CV = standard deviation / mean) allows you to compare risk per unit of return across investments with different expected returns. The Sharpe ratio (excess return / standard deviation) is the most common risk-adjusted return measure — know it cold.
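These dispersion and risk-adjusted measures chain together naturally. A sketch on a hypothetical return sample (note that `statistics.stdev` uses the n − 1 sample denominator the text emphasizes):

```python
from statistics import mean, stdev  # stdev divides by n - 1 (sample form)

returns = [0.12, 0.08, -0.04, 0.15, 0.06]   # hypothetical annual returns
rf = 0.03                                   # assumed risk-free rate

mu = mean(returns)
sigma = stdev(returns)          # sample standard deviation
cv = sigma / mu                 # risk per unit of return
sharpe = (mu - rf) / sigma      # excess return per unit of total risk
```

A lower CV means less risk per unit of return; a higher Sharpe ratio means more excess return per unit of risk — the two ratios answer mirror-image questions.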
Distribution shape: Skewness measures asymmetry — positive skew means the right tail is longer (more extreme positive outcomes), negative skew means the left tail is longer (more extreme losses). Kurtosis measures tail thickness — leptokurtic distributions (excess kurtosis > 0) have fatter tails than a normal distribution, meaning more probability in the extreme outcomes. This matters enormously for risk assessment: a portfolio with leptokurtic returns has more downside risk than standard deviation alone suggests.
The module finishes with covariance and correlation between two variables, including scatter plots, properties of correlation, and the critical limitations of correlation analysis (it doesn’t imply causation, it only captures linear relationships, and outliers can distort it).
LM 4: Probability Trees and Conditional Expectations
This module covers the probability tools you need for scenario analysis — see the dedicated probability concepts page for a deeper dive.
Expected value and variance under different economic scenarios (boom, normal, recession) are calculated using probability-weighted averages. Probability trees visualize multi-step probability problems and make conditional expectations tractable.
The total probability rule lets you calculate an unconditional expected value by summing probability-weighted conditional expected values across mutually exclusive scenarios. This is how analysts build expected return estimates that account for multiple economic outcomes.
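This is one of the most mechanical calculations on the exam, so it is worth seeing in code. A sketch with hypothetical scenario probabilities and conditional expected returns:

```python
# Hypothetical scenario analysis: P(scenario) and E[return | scenario].
scenarios = {
    "boom":      (0.30, 0.15),
    "normal":    (0.50, 0.08),
    "recession": (0.20, -0.05),
}

# Total probability rule: E[R] = sum over exclusive scenarios of P(s) * E[R|s].
expected_return = sum(p * r for p, r in scenarios.values())

# Probability-weighted variance around that unconditional expectation.
variance = sum(p * (r - expected_return) ** 2 for p, r in scenarios.values())
```

Note the probabilities must sum to 1 and the scenarios must be mutually exclusive and exhaustive — the two conditions the total probability rule requires.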
Bayes’ formula is the exam question most candidates dread. It updates a prior probability estimate with new information to produce a posterior probability. The classic setup: given a positive signal, what’s the actual probability the underlying event is true? Work through several practice problems until the update formula feels mechanical. Probability trees make Bayes’ problems much more manageable.
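Until the update feels mechanical, it can help to lay the arithmetic out explicitly. A sketch of the classic signal setup, with made-up numbers: 10% of firms default, a credit model flags 90% of eventual defaulters but also flags 20% of non-defaulters:

```python
p_default = 0.10                 # prior probability of the event
p_flag_given_default = 0.90      # true positive rate of the signal
p_flag_given_no_default = 0.20   # false positive rate of the signal

# Denominator via the total probability rule: P(flag).
p_flag = (p_flag_given_default * p_default
          + p_flag_given_no_default * (1 - p_default))

# Bayes' formula: P(default | flag) = P(flag | default) * P(default) / P(flag).
p_default_given_flag = p_flag_given_default * p_default / p_flag
```

The posterior here is only one third — despite the "90% accurate" signal — because true defaults are rare relative to false positives. That counterintuitive gap between signal accuracy and posterior probability is exactly what the exam tests.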
LM 5: Portfolio Mathematics
This module is the quantitative backbone of Portfolio Management. You’ll learn to calculate portfolio expected return and variance for two-asset and multi-asset portfolios, with heavy emphasis on how correlation drives diversification benefits.
The two-asset portfolio variance is σₚ² = w₁²σ₁² + w₂²σ₂² + 2w₁w₂Cov(R₁,R₂). Since Cov(R₁,R₂) = ρ₁₂σ₁σ₂, the formula can also be written with correlation directly. The key insight: when correlation (ρ) is less than +1, portfolio risk is less than the weighted average of individual risks. That’s the mathematical proof of diversification. When ρ = −1, you can theoretically construct a zero-risk portfolio.
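You can verify both claims numerically. A sketch with a hypothetical 60/40 mix of assets with 20% and 10% volatility:

```python
import math

def portfolio_sd(w1, s1, s2, rho):
    """Two-asset portfolio standard deviation; the second weight is 1 - w1."""
    w2 = 1 - w1
    var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(max(var, 0.0))  # guard against tiny negative rounding

weighted_avg = 0.6 * 0.20 + 0.4 * 0.10           # 0.16

sd_rho_1 = portfolio_sd(0.6, 0.20, 0.10, 1.0)    # equals the weighted average
sd_rho_0 = portfolio_sd(0.6, 0.20, 0.10, 0.0)    # below it: diversification

# With rho = -1, weights s2/(s1+s2) and s1/(s1+s2) give zero risk.
sd_zero = portfolio_sd(0.10 / 0.30, 0.20, 0.10, -1.0)
```

At ρ = +1 the portfolio standard deviation equals the weighted average of the individual risks; anything below +1 pulls it lower, and ρ = −1 with the right weights drives it to zero.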
The module extends to forecasting covariance using joint probability functions and covers portfolio risk applications of the normal distribution — including safety-first ratios and shortfall risk. These concepts feed directly into the efficient frontier and Capital Allocation Line you’ll encounter in Portfolio Management.
LM 6: Simulation Methods
This module introduces the lognormal distribution, continuously compounded returns, Monte Carlo simulation, and bootstrapping. The exam focus is conceptual — understanding when and why each technique is used, not running actual simulations.
Lognormal distribution: Asset prices are often modeled as lognormally distributed because prices can’t go below zero. Continuously compounded returns are modeled as normally distributed, which means the price levels they generate are lognormal. Know this relationship.
Monte Carlo simulation generates thousands of random scenarios from assumed distributions to model complex outcomes — useful for option pricing, VaR estimation, and retirement planning. Bootstrapping resamples from actual historical data (with replacement) to generate empirical distributions without assuming a parametric form. Know the advantages and limitations of each: Monte Carlo requires distributional assumptions; bootstrapping doesn’t, but depends on the historical sample being representative.
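The conceptual contrast is easy to see side by side. A sketch using only the standard library — the distribution parameters and the historical return series are made up for illustration:

```python
import random
from statistics import mean

random.seed(42)  # fix the seed so the simulation is reproducible

# Monte Carlo: draw 1-year returns from an ASSUMED normal distribution
# (7% mean, 15% standard deviation — hypothetical parameters).
mc_returns = [random.gauss(0.07, 0.15) for _ in range(10_000)]

# Bootstrapping: resample with replacement from ACTUAL historical returns,
# making no distributional assumption at all.
history = [0.12, -0.08, 0.05, 0.20, -0.03, 0.09, 0.01, -0.11, 0.15, 0.04]
boot_means = [mean(random.choices(history, k=len(history)))
              for _ in range(5_000)]
```

The Monte Carlo results are only as good as the assumed distribution; the bootstrap results are only as good as the historical sample's representativeness — the exact trade-off the exam asks about.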
LM 7: Estimation and Inference
This module covers sampling methods, the central limit theorem, and the construction of confidence intervals — the bridge between descriptive statistics and hypothesis testing.
Sampling methods: Simple random sampling (every member has equal probability), stratified random sampling (divide population into subgroups, sample from each), cluster sampling (randomly select clusters, then sample within them), and non-probability sampling methods. Know the advantages of stratified sampling for financial applications — it ensures representation from each subgroup, like market cap segments.
Central limit theorem (CLT): Regardless of the population distribution, the sampling distribution of the mean approaches normality as sample size increases (n ≥ 30 is the conventional threshold). This is why we can use normal-distribution-based tests even when the underlying data isn’t normally distributed.
Standard error of the sample mean = σ / √n. As sample size grows, the standard error shrinks, meaning our estimate of the mean becomes more precise. This is the foundation for confidence intervals: a 95% confidence interval is the sample mean ± 1.96 standard errors (using the z-distribution).
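The whole chain — sample mean, standard error, confidence interval — fits in a few lines. A sketch on a hypothetical return sample (it uses the z multiplier from the text; with a small sample and unknown population variance, you would substitute the t-distribution critical value in practice):

```python
import math
from statistics import mean, stdev

sample = [0.06, 0.11, -0.02, 0.09, 0.04, 0.13, 0.01, 0.08, 0.05, 0.10]

x_bar = mean(sample)
se = stdev(sample) / math.sqrt(len(sample))   # standard error of the mean

# 95% confidence interval: sample mean +/- 1.96 standard errors (z-based).
ci_low, ci_high = x_bar - 1.96 * se, x_bar + 1.96 * se
```

Quadrupling the sample size halves the standard error (the √n in the denominator), which is why larger samples produce tighter intervals.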
The module also covers bootstrapping applied to empirical sampling distributions — a practical alternative when parametric assumptions are questionable.
LM 8: Hypothesis Testing
One of the most formulaic and testable modules in all of Quant — see the dedicated hypothesis testing page for a deeper treatment.
The process follows a mechanical five-step framework: state the null and alternative hypotheses, select the test statistic, determine the decision rule (significance level and critical value), calculate the test statistic from your data, and make a statistical decision.
Type I error (rejecting a true null — a false positive) vs. Type II error (failing to reject a false null — a false negative). The significance level (α) is the probability of Type I error. Power (1 − β) measures the test’s ability to correctly detect a real effect.
The curriculum covers tests for finance applications specifically: tests of mean returns, tests of differences between means (independent and dependent samples), and tests of equality of variances (using the F-distribution). Know how to run one-tailed and two-tailed tests, and know both the p-value approach (reject if p < α) and the critical value approach.
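The five-step framework maps directly onto a few lines of code. A sketch of a two-tailed one-sample t-test on hypothetical monthly excess returns (the critical value is the standard t-table entry for df = 9 at α = 0.05; on the exam it would be given):

```python
import math
from statistics import mean, stdev

# Step 1: H0: mu = 0 (mean monthly excess return is zero) vs. Ha: mu != 0.
sample = [0.8, -0.3, 1.2, 0.5, -0.1, 0.9, 0.4, 1.1, -0.2, 0.7]  # % per month

# Steps 2-3: one-sample t-test, df = n - 1; two-tailed at alpha = 0.05.
n = len(sample)
t_critical = 2.262   # t-table value for df = 9, two-tailed 5% level

# Step 4: test statistic = (x_bar - mu0) / (s / sqrt(n)).
t_stat = (mean(sample) - 0) / (stdev(sample) / math.sqrt(n))

# Step 5: decision rule — reject H0 if |t| exceeds the critical value.
reject_h0 = abs(t_stat) > t_critical
```

Here the statistic (about 2.92) clears the 2.262 hurdle, so the null is rejected at the 5% level — the same conclusion the p-value approach would give, since p < α exactly when |t| exceeds the critical value.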
LM 9: Parametric and Non-Parametric Tests of Independence
This module extends hypothesis testing to correlation testing and contingency tables. You’ll learn the parametric test of a correlation coefficient (testing whether ρ = 0), the Spearman rank correlation coefficient for non-parametric data, and chi-square tests of independence using contingency table data.
For the exam, the key question is: when do you use parametric vs. non-parametric tests? Parametric tests assume a specific distribution (typically normal); non-parametric tests don’t. Use non-parametric tests when: the data is ranked or ordinal, the sample is small, or the normality assumption is clearly violated.
LM 10: Simple Linear Regression
This module introduces the regression framework you’ll use throughout the CFA program (it expands significantly at Level 2). You’ll learn to estimate a regression line (Y = b₀ + b₁X), interpret the slope and intercept, and evaluate model quality.
Four assumptions of simple linear regression: linearity, homoskedasticity (constant variance of errors), independence of errors, and normality of errors. Know what happens when each assumption is violated — heteroskedasticity, for example, doesn’t bias the coefficient estimates but does invalidate the standard error calculations.
Goodness of fit: R² measures the proportion of variation in Y explained by X. An R² of 0.75 means 75% of the variability in the dependent variable is explained by the independent variable. But R² alone doesn’t tell you if the relationship is statistically significant — you need the t-test on the slope coefficient for that.
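Interpreting regression output is easier once you have computed the pieces by hand at least once. A sketch of ordinary least squares on a hypothetical paired sample (X = market excess return, Y = stock excess return, in %):

```python
from statistics import mean

x = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0]   # hypothetical independent variable
y = [1.4, -1.8, 3.9, 0.2, -0.9, 2.6]   # hypothetical dependent variable

x_bar, y_bar = mean(x), mean(y)
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)

b1 = s_xy / s_xx                # slope: change in Y per unit change in X
b0 = y_bar - b1 * x_bar         # intercept: the line passes through the means

# R^2 = 1 - SSE / SST: proportion of variation in Y explained by X.
ss_total = sum((yi - y_bar) ** 2 for yi in y)
ss_resid = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
r_squared = 1 - ss_resid / ss_total
```

An R² near 0.98 on this made-up data says X explains almost all the variation in Y — but as the text notes, statistical significance of the slope still requires the t-test, which R² alone cannot deliver.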
The curriculum also covers ANOVA tables, prediction intervals, and functional forms (log-lin, lin-log, log-log models) — know how to interpret the slope in each form.
LM 11: Introduction to Big Data Techniques
A newer addition to the CFA curriculum. It covers fintech applications in investment analysis, big data characteristics (volume, velocity, variety), artificial intelligence and machine learning (supervised vs. unsupervised learning), data processing methods, data visualization, and text analytics/natural language processing.
The exam focus is conceptual — you won’t be asked to code anything. Know the difference between overfitting and underfitting, training vs. test sets, and the bias-variance tradeoff. Understand that supervised learning uses labeled data to predict outcomes, while unsupervised learning finds patterns in unlabeled data (like clustering). Don’t over-invest study time here — a solid conceptual understanding is sufficient.
LM 12: Appendices (Statistical Tables)
Appendices A through E contain the z-table, t-table, chi-square table, F-table, and other reference distributions. On exam day, any required critical values will be provided in the question, so you don’t need to memorize these tables. However, familiarizing yourself with how to read them will speed up your practice problem work.
Study Strategy for Quantitative Methods
Start with LM 1–2 (returns and TVM) since these are prerequisites for virtually every other CFA topic. Spend extra time on LM 5 (portfolio math) and LM 7–8 (estimation and hypothesis testing) — these have the highest direct exam relevance. LM 11 (big data) can be studied lightly; it’s tested at a conceptual level only.
Practice with your calculator constantly. Quant is the one section where speed matters as much as knowledge — a TVM problem that takes 90 seconds instead of 30 costs you precious time across two exam sessions. See the full study plan for recommended time allocation.
Key Takeaways
- TVM and rates of return are the foundation — master them before touching any other topic.
- The money-weighted vs. time-weighted return distinction is a perennial exam favorite.
- The portfolio variance formula (with correlation) is critical here and in Portfolio Management.
- Hypothesis testing follows a mechanical five-step process — know it cold.
- Know when to use z vs. t statistics, and understand Type I vs. Type II errors.
- Regression: focus on interpreting output (slope, R², significance) rather than computation.
- Big data (LM 11) is conceptual only — don’t over-invest time here.
Frequently Asked Questions
Is CFA Level 1 Quantitative Methods hard?
It depends heavily on your background. If you have a quantitative degree (engineering, math, economics), most of this material will be review and you can compress your study time. If you’re coming from a non-quantitative background, budget extra time for hypothesis testing and regression. The concepts aren’t inherently difficult, but they require practice with a financial calculator.
Which calculator should I use for the CFA exam?
CFA Institute allows only the Texas Instruments BA II Plus (or Professional) and the Hewlett-Packard 12C (or Platinum). The BA II Plus is more popular among candidates for its intuitive TVM keys. Whichever you choose, practice extensively — you should be able to solve standard TVM problems without thinking about the keystrokes.
How many Quantitative Methods questions are on CFA Level 1?
At 6–9% weight across 180 questions, expect roughly 11–16 questions. They tend to be calculation-heavy (TVM, hypothesis testing, portfolio variance) with a few conceptual questions on probability, regression interpretation, and big data.
What’s the connection between Quant and other CFA Level 1 topics?
TVM underpins bond pricing (Fixed Income), equity valuation (Equity Investments), and derivative pricing (Derivatives). Portfolio math feeds directly into Portfolio Management. Statistics and hypothesis testing appear in Financial Reporting analysis. Study Quant first — the study plan puts it in Weeks 1–2 for this reason.
Do I need to know how to code for the Big Data module?
No. LM 11 is tested at a conceptual level only. You need to understand what machine learning, NLP, and big data techniques are and when they’re useful — not how to implement them. Focus on the distinction between supervised and unsupervised learning, overfitting vs. underfitting, and the role of data visualization in investment analysis.