Using sctransform in Seurat
Christoph Hafemeister & Rahul Satija
This vignette shows how to use the sctransform wrapper in Seurat.
Install sctransform and Seurat v3.
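For example, both packages can be installed from CRAN (a minimal sketch; at the time of writing, both are available there):

# Install Seurat and sctransform from CRAN
install.packages("Seurat")
install.packages("sctransform")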
library(Seurat)
library(ggplot2)
library(sctransform)
Load data and create Seurat object
pbmc_data <- Read10X(data.dir = "../data/pbmc3k/filtered_gene_bc_matrices/hg19/")
pbmc <- CreateSeuratObject(counts = pbmc_data)
Apply sctransform normalization
- Note that this single command replaces NormalizeData(), ScaleData(), and FindVariableFeatures().
- Transformed data will be available in the SCT assay, which is set as the default after running sctransform
- During normalization, we can also remove confounding sources of variation, for example, mitochondrial mapping percentage
# store mitochondrial percentage in object meta data
pbmc <- PercentageFeatureSet(pbmc, pattern = "^MT-", col.name = "percent.mt")

# run sctransform
pbmc <- SCTransform(pbmc, vars.to.regress = "percent.mt", verbose = FALSE)
Perform dimensionality reduction by PCA and UMAP embedding
# These are now standard steps in the Seurat workflow for visualization and clustering
pbmc <- RunPCA(pbmc, verbose = FALSE)
pbmc <- RunUMAP(pbmc, dims = 1:30, verbose = FALSE)

pbmc <- FindNeighbors(pbmc, dims = 1:30, verbose = FALSE)
pbmc <- FindClusters(pbmc, verbose = FALSE)
DimPlot(pbmc, label = TRUE) + NoLegend()
Why can we choose more PCs when using sctransform?
In the standard Seurat workflow we focus on 10 PCs for this dataset, though we highlight that the results are similar with higher settings for this parameter. Interestingly, we've found that when using sctransform, we often benefit by pushing this parameter even higher. We believe this is because the sctransform workflow performs more effective normalization, strongly removing technical effects from the data.
Even after standard log-normalization, variation in sequencing depth is still a confounding factor (see Figure 1), and this effect can subtly influence higher PCs. In sctransform, this effect is substantially mitigated (see Figure 3). This means that higher PCs are more likely to represent subtle, but biologically relevant, sources of heterogeneity -- so including them may improve downstream analysis.
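As an illustration (not part of the original workflow above), Seurat's ElbowPlot can help visualize how variance is distributed across components when deciding how many PCs to retain:

# Sketch: inspect the standard deviations of the first 50 PCs
# (RunPCA computes 50 by default) to judge how many to keep
ElbowPlot(pbmc, ndims = 50)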
In addition, sctransform returns 3,000 variable features by default, instead of 2,000. The rationale is similar: the additional variable features are less likely to be driven by technical differences across cells, and instead may represent more subtle biological fluctuations. In general, we find that results produced with sctransform are less dependent on these parameters (indeed, we achieve nearly identical results when using all genes in the transcriptome, though this does reduce computational efficiency). This can help users generate more robust results, and in addition, enables the application of standard analysis pipelines with identical parameter settings that can quickly be applied to new datasets.
For example, the following code replicates the full end-to-end workflow in a single command:
pbmc <- CreateSeuratObject(pbmc_data) %>%
    PercentageFeatureSet(pattern = "^MT-", col.name = "percent.mt") %>%
    SCTransform(vars.to.regress = "percent.mt") %>%
    RunPCA() %>%
    FindNeighbors(dims = 1:30) %>%
    RunUMAP(dims = 1:30) %>%
    FindClusters()
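As a quick check of the point above about variable features (an illustrative snippet; VariableFeatures() is the standard Seurat accessor):

# Count the variable features selected by SCTransform
# (3,000 by default, versus 2,000 in the standard workflow)
length(VariableFeatures(pbmc))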
Where are normalized values stored for sctransform?
As described in our preprint, sctransform calculates a model of technical noise in scRNA-seq data using 'regularized negative binomial regression'. The residuals for this model are normalized values, and can be positive or negative. Positive residuals for a given gene in a given cell indicate that we observed more UMIs than expected given the gene’s average expression in the population and cellular sequencing depth, while negative residuals indicate the converse.
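Conceptually (a sketch of the model described in the preprint, with illustrative values rather than a real fit), each normalized value is a Pearson residual of the regularized negative binomial regression:

# Illustrative values (not from an actual fit)
x <- 5        # observed UMI count for gene g in cell c
mu <- 2.3     # expected count given the gene's mean expression and the cell's sequencing depth
theta <- 10   # the gene's NB dispersion parameter

# Pearson residual: observed minus expected, scaled by the NB standard deviation
pearson_residual <- (x - mu) / sqrt(mu + mu^2 / theta)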
pbmc[["SCT"]]@scale.datacontains the residuals (normalized values), and is used directly as input to PCA. Please note that this matrix is non-sparse, and can therefore take up a lot of memory if stored for all genes. To save memory, we store these values only for variable genes, by setting the return.only.var.genes = TRUE by default in the
- To assist with visualization and interpretation, we also convert Pearson residuals back to 'corrected' UMI counts. You can interpret these as the UMI counts we would expect to observe if all cells were sequenced to the same depth. If you want to see exactly how we do this, please look at the correct function here.
- The 'corrected' UMI counts are stored in pbmc[["SCT"]]@counts. We store log-normalized versions of these corrected counts in pbmc[["SCT"]]@data, which are very helpful for visualization.
- You can use the corrected log-normalized counts for differential expression and integration. However, in principle, it would be most optimal to perform these calculations directly on the residuals (stored in the scale.data slot) themselves. This is not currently supported in Seurat v3, but will be soon.
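To make the storage layout above concrete, here is an illustrative snippet (GetAssayData is the standard Seurat accessor; slot names as described in the bullets above):

# Pearson residuals (input to PCA); dense, variable genes only by default
residuals <- GetAssayData(pbmc, assay = "SCT", slot = "scale.data")

# 'corrected' UMI counts
corrected_counts <- GetAssayData(pbmc, assay = "SCT", slot = "counts")

# log-normalized corrected counts, useful for visualization
log_norm <- GetAssayData(pbmc, assay = "SCT", slot = "data")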
Users can individually annotate clusters based on canonical markers. However, the sctransform normalization reveals sharper biological distinctions compared to the standard Seurat workflow, in a few ways:
- Clear separation of at least 3 CD8 T cell populations (naive, memory, effector), based on CD8A, GZMK, and CCL5 expression
- Clear separation of three CD4 T cell populations (naive, memory, IFN-activated) based on S100A4, CCR7, IL32, and ISG15
- Additional developmental sub-structure in B cell cluster, based on TCL1A, FCER2
- Additional separation of NK cells into CD56dim vs. bright clusters, based on XCL1 and FCGR3A
# Visualize canonical marker genes as violin plots.
VlnPlot(pbmc, features = c("CD8A", "GZMK", "CCL5", "S100A4", "ANXA1", "CCR7", "ISG15",
    "CD3D"), pt.size = 0.2, ncol = 4)
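The same markers can also be inspected on the UMAP embedding; an illustrative snippet using Seurat's FeaturePlot (gene set chosen from the lists above):

# Visualize a subset of the canonical markers on the UMAP embedding
FeaturePlot(pbmc, features = c("CD8A", "GZMK", "CCL5", "S100A4", "ANXA1", "CCR7"),
    pt.size = 0.2, ncol = 3)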