Getting started with wrProteo

Wolfgang Raffelsberger

2021-03-10

Introduction

This package contains a collection of tools for proteomics used at the proteomics platform of the IGBMC. To get started, we need to load the package “wrMisc” and this package (wrProteo); both are available from CRAN. The packages wrGraph and RColorBrewer are used internally by some of the functions of this package for (optional/improved) figures. Furthermore, the Bioconductor package limma will be used internally for statistical testing.

If you are not familiar with R you may find many introductory documents on the official R-site, in the contributed documents or under Documentation/Manuals. Of course, numerous other documents/sites with tutorials exist.

The aim of package-vignettes is to provide additional information on how the R-package concerned may be used, thus complementing the documentation given with help() for each of the functions of the package. In terms of examples, frequent standard types of problems are preferred in a vignette. Nevertheless, most functions can be used in many other ways; for this you may have to check the various arguments via calling help on the function of interest. All R-code in this vignette can be directly repeated by the user, and all data used is provided with the package.

## if you need to install the packages 'wrMisc','wrProteo' and 'wrGraph' from CRAN :
install.packages("wrMisc")
install.packages("wrProteo")
## The package 'wrGraph' is not obligatory, but it allows making better graphs
install.packages("wrGraph")

# Installation of limma 
if(!requireNamespace("BiocManager", quietly=TRUE)) install.packages("BiocManager")
BiocManager::install("limma")
## Get started by loading the packages
library("wrMisc")
library("wrProteo")
library("wrGraph")
# This is wrProteo version no :
packageVersion("wrProteo")
#> [1] '1.4.0'

This way you can browse all vignettes available for this package :

browseVignettes("wrProteo")

There you can find another vignette dedicated to the analysis of heterogeneous spike-in experiments.

Calculating Molecular Masses From Composition Formulas

Please note that molecular masses may be given in two flavours: monoisotopic mass and average mass. For details you may refer to Wikipedia: monoisotopic mass. Monoisotopic masses are commonly used in mass-spectrometry and will be used by default in this package.

Molecular (mono-isotopic) masses of the atoms integrated in this package were taken from Unimod. They can easily be updated should (mono-isotopic) molecular masses be determined with higher precision (ie more digits) in the future.

Molecular masses based on (summed) chemical formulas

At this level (summed) atomic compositions are evaluated. Here, the number of atoms has to be written before the atom. Thus, ‘2C’ means two atoms of carbon. Empty or invalid entries will by default be returned as mass=0; a message will report such issues.

The mass of an electron can be assigned using ‘e’.

massDeFormula(c("12H12O", "HO", " 2H 1 Se, 6C 2N", "HSeCN", " ", "e"))
#>  -> massDeFormula :  can't find ' ' .. setting to 0 mass
#>       12H12O         1H1O  2H1Se1e6C2N  1H1Se1e1C1N                        1e 
#> 2.040329e+02 1.700274e+01 1.819389e+02 1.069280e+02 0.000000e+00 5.485799e-04

# Ignore empty/invalid entries
massDeFormula(c("12H12O", "HO", " 2H 1 Se, 6C 2N", "HSeCN"), rmEmpty=TRUE)
#>      12H12O        1H1O 2H1Se1e6C2N 1H1Se1e1C1N 
#>   204.03288    17.00274   181.93887   106.92797

Using the argument massTy one can switch from default monoisotopic mass to average mass :

massDeFormula(c("12H12O", "HO", " 2H 1 Se, 6C 2N", "HSeCN"), massTy="aver")
#>      12H12O        1H1O 2H1Se1e6C2N 1H1Se1e1C1N 
#>   204.08808    17.00734   181.05403   105.98589

Molecular masses based on amino-acid sequence

The masses of these amino-acids can be used:

AAmass()
#>         A         R         N         D         C         E         Q         G 
#>  71.03711 156.10111 114.04293 115.02694 103.00918 129.04259 128.05858  57.02146 
#>         H         I         L         K         M         F         P         S 
#> 137.05891 113.08406 113.08406 128.09496 131.04048 147.06841  97.05276  87.03203 
#>         T         W         Y         V 
#> 101.04768 186.07931 163.06333  99.06841

Here the one-letter amino-acid code is used to describe peptides or proteins.

## mass of peptide (or protein)
pep1 <- c(aa="AAAA",de="DEFDEF")
convAASeq2mass(pep1, seqN=FALSE)
#>       aa       de 
#> 302.1590 800.2865

Reading Fasta Files (from Uniprot)

This package contains a parser for Fasta-files which allows separating the different fields of meta-data, like IDs, name and species. Here we will read a tiny example fasta-file obtained from a collection of typical contaminants in proteomics.

path1 <- system.file('extdata', package='wrProteo')
fiNa <-  "conta1.fasta"
## basic reading of Fasta
fasta1 <- readFasta2(file.path(path1,fiNa))
str(fasta1)
#>  Named chr [1:35] "FPTDDDDKIVGGYTCAANSIPYQVSLNSGSHFCGGSLINSQWVVSAAHCYKSRIQVRLGEHNIDVLEGNEQFINAAKIITHPNFNGNTLDNDIMLIKLSSPATLNSRVATV"| __truncated__ ...
#>  - attr(*, "names")= chr [1:35] "TRYP_PIG TRYPSIN PRECURSOR" "TRY1_BOVIN Cationic trypsin (Fragment) - Bos taurus (Bovine)" "CTRA_BOVIN Chymotrypsinogen A - Bos taurus (Bovine)" "CTRB_BOVIN Chymotrypsinogen B - Bos taurus (Bovine)" ...

## now let's read and further separate details in annotation-fields
fasta1det <- readFasta2(file.path(path1,fiNa), tableOut=TRUE)
str(fasta1det)
#>  chr [1:35, 1:9] "generic" "generic" "generic" "generic" "generic" ...
#>  - attr(*, "dimnames")=List of 2
#>   ..$ : NULL
#>   ..$ : chr [1:9] "database" "uniqueIdentifier" "entryName" "proteinName" ...



Example : Analyzing Label-free Quantitative Proteomics Data

Label-free Quantitative Proteomics Introduction

Multiple algorithms and software implementations have been developed for quantitation in label-free proteomics experiments (LFQ), in particular based on extracted ion chromatograms (XIC). For more background information you may look at Wikipedia: label-free proteomics.

The tools presented here are designed for use with label-free XIC (ie LFQ) data. Several of the programs for extracting initial quantitations also allow obtaining spectral counting (PSM) data, which can also be imported into R; however, their use is not further discussed in this vignette. In general it is preferable to use XIC for comparing peptide or protein quantities between different protein extracts/samples.

This package provides support for importing quantitation results from Proteome Discoverer, MaxQuant and Proline. All quantitation import functions offer special features for further separating annotation-related information, like species, for later use.

In most common real-world cases people typically analyze data using only one quantitation algorithm/software. Below in this vignette, we’ll use only the quantitation data generated using MaxQuant (ProteomeDiscoverer, Proline and OpenMS are supported, too). The other vignette to this package (“UPS-1 spike-in Experiments”) shows in detail the import functions available for ProteomeDiscoverer and Proline and how further comparisons can be performed in bench-mark studies. All these import functions generate an equivalent output format, separating (selected) annotation data ($annot) from normalized log2-quantitation data ($quant) and initial quantitation ($raw).

Normalization (discussed below in more detail) is an important part of ‘preparing’ the data for subsequent analysis. The import functions in this package also allow performing an initial normalization step (with a choice among multiple algorithms). As further information about the identified proteins can be considered during normalization, it is, eg, possible to exclude contaminants. In proteomics experiments it is very common to find contaminants like keratins among the more abundant proteins, which may introduce bias at normalization.

Technical replicates are very frequently produced in proteomics; they allow assessing the variability linked to repeated injection of the same material. Biological replicates, however, make additional information accessible, allowing interpretation of the experiment in a more general way.

Read Data From MaxQuant

MaxQuant is free software provided by the Max-Planck-Institute, see Tyanova et al 2016. By default MaxQuant exports quantitation data on the level of consensus-proteins into a folder called txt, with a file always called ‘proteinGroups.txt’. Data exported from MaxQuant can be imported (and normalized) using readMaxQuantFile(); in a standard case one only needs to provide the path to the file ‘proteinGroups.txt’, which can be found in the combined/txt/ folder produced by MaxQuant. Gz-compressed files can be read, too (as in the example below with the file ‘proteinGroups.txt.gz’).

path1 <- system.file("extdata", package="wrProteo")
dataMQ <- readMaxQuantFile(path1, specPref=NULL, normalizeMeth="median")
#>  -> readMaxQuantFile : Note: Found 11 proteins marked as 'REV_' (reverse peptide identification) - Removing
#>  -> readMaxQuantFile : Display 2845(9.5%) initial '0' values as 'NA'
#>  -> readMaxQuantFile :  Note: 6 proteins with unknown species
#>     by species : Homo sapiens: 49,  Mus musculus: 1,  Saccharomyces cerevisiae: 1047,  Sus scrofa: 1,
#>  -> readMaxQuantFile : Found 75 composite accession-numbers, truncating (eg P00761;CON__P00761)
#>  -> readMaxQuantFile : Use column 'Accession' as identifyer (has fewest, ie 0 duplicated entries) as rownames

The data were imported and median-normalized; the protein annotation was parsed, and IDs, protein-names and species information were automatically extracted.
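
For a quick overview, one may inspect the main elements of the returned object; as mentioned above, the import functions return a list separating annotation ($annot) from normalized quantitation data ($quant) and initial quantitation ($raw) :

## main elements returned by readMaxQuantFile()
names(dataMQ)
## the annotation columns extracted during import
head(colnames(dataMQ$annot))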

## the number of lines and columns
dim(dataMQ$quant)
#> [1] 1104   27
## a summary of the quantitation data
summary(dataMQ$quant[,1:8])        # the first 8 cols
#>   12500amol_1     12500amol_2     12500amol_3      125amol_1    
#>  Min.   :17.50   Min.   :15.63   Min.   :14.89   Min.   :15.21  
#>  1st Qu.:22.49   1st Qu.:22.47   1st Qu.:22.48   1st Qu.:22.40  
#>  Median :23.45   Median :23.45   Median :23.45   Median :23.45  
#>  Mean   :23.67   Mean   :23.64   Mean   :23.65   Mean   :23.62  
#>  3rd Qu.:24.80   3rd Qu.:24.74   3rd Qu.:24.76   3rd Qu.:24.84  
#>  Max.   :30.29   Max.   :30.26   Max.   :30.33   Max.   :30.27  
#>  NA's   :98      NA's   :100     NA's   :104     NA's   :114    
#>    125amol_2       125amol_3      25000amol_1     25000amol_2   
#>  Min.   :14.88   Min.   :14.92   Min.   :15.73   Min.   :18.35  
#>  1st Qu.:22.39   1st Qu.:22.41   1st Qu.:22.44   1st Qu.:22.46  
#>  Median :23.45   Median :23.45   Median :23.45   Median :23.45  
#>  Mean   :23.62   Mean   :23.63   Mean   :23.65   Mean   :23.65  
#>  3rd Qu.:24.84   3rd Qu.:24.80   3rd Qu.:24.86   3rd Qu.:24.80  
#>  Max.   :30.28   Max.   :30.31   Max.   :30.20   Max.   :30.02  
#>  NA's   :106     NA's   :114     NA's   :109     NA's   :115

To treat the data from this experiment we also need to declare which samples are replicates of the same condition. In this case we’ll use the UPS1 concentrations of the serial dilution as names for the groups of replicates.

UPSconc <- c(50,125,250,500,2500,5000,12500,25000,50000)  
sampNa <- paste0(rep(UPSconc, each=3),"amol_",rep(1:3,length(UPSconc))) 
grp9 <- paste0(rep(UPSconc,each=3),"amol") 
head(grp9)
#> [1] "50amol"  "50amol"  "50amol"  "125amol" "125amol" "125amol"

Read Data From ProteomeDiscoverer

Proteome Discoverer is commercial software from ThermoFisher (www.thermofisher.com). Data exported from Proteome Discoverer can get imported (typically the xx_allProteins.txt file) using readProtDiscovFile(), for details please see the vignette “UPS-1 spike-in Experiments” to this package.

Read Data From Proline

Proline is free software provided by the Profi-consortium, see Bouyssie et al 2020. Data exported from Proline (xlsx or tsv format) can get imported using readProlineFile(), for details please see the vignette “UPS-1 spike-in Experiments” to this package.

Read Data From OpenMS

OpenMS is free software provided by the deNBI Center for integrative Bioinformatics, see Rost et al 2016. Data exported as csv get summarized from peptide to protein level and further normalized using readOpenMSFile(), for details please see the help-page to this function.

Normalization

As mentioned, the aim of normalization is to remove bias in the data that is not linked to the original (biological) question. The import functions presented above already run global median normalization by default. When choosing a normalization procedure one should consider what additional information may be available to guide normalization. For example, it may be very useful to exclude classical protein contaminants since they typically do not reflect the original biological material. Overall, it is important to inspect the results of normalization, eg via histograms, boxplots or violin-plots comparing the distributions. Multiple options exist for normalizing data; please look at the documentation provided with the import-functions introduced above. Please note that enrichment experiments (like IP) can quickly pose major problems for the choice of a normalization approach. The function normalizeThis() from the package wrMisc can be used to run additional normalization, if needed (see the sketch below).
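
As a minimal sketch of such an additional normalization step (shown for illustration only; the method chosen here is just an example, please see the help page of normalizeThis() for the methods and arguments actually available) :

## re-normalize the log2 quantitation data by column means instead of medians (illustration)
quantMeanNorm <- wrMisc::normalizeThis(dataMQ$quant, method="mean")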

Different normalization procedures intervene with different ‘aggressiveness’, ie with different capacity to change the initial data. In general, it is suggested to start normalizing with ‘milder’ procedures, like global median, and to switch to more intervening methods if the initial results do not seem satisfactory. Beware, heavy normalization procedures may also alter the main information you want to analyze, ie some biologically true positive changes may start to fade or disappear when inappropriate normalization is performed. Please note that normalization should be performed before NA-imputation to avoid introducing new bias among the imputed values.

Imputation of NA-values

In proteomics the quantitation of very low abundances is very challenging. Proteins which are absent or at very low abundance typically appear in the results as 0 or NA. Frequently this is linked to the fact that no peak is detected in an MS-1 chromatogram (for a given LC elution-time) while other samples had a strong peak at the respective place, which led to successful MS-2 identification. Please note that the match-between-runs option available in the various software tools considerably reduces the number of NAs. To simplify the treatment, all 0 values are transformed to NA; anyway, they would not allow log2 transformation.

Before replacing NA-values it is important to verify that such values are indeed associated with absent or very low abundances. To do so, we suggest inspecting groups of replicate-measurements using matrixNAinspect(). In particular, with multiple technical replicates of the same sample it is assumed that any variability observed is not linked to the sample itself. So for each NA that occurs in the data we suggest looking at what was reported for the same protein in the other (technical) replicates. This brings us to the term ‘NA-neighbours’ (quantifications for the same protein in replicates). When drawing histograms of NA-neighbours one can visually verify that NA-neighbours are typically low abundance values, but not necessarily the lowest values observed in the entire data-set.

## Let's inspect NA values as graphic
matrixNAinspect(dataMQ$quant, gr=grp9, tit="Data from MaxQuant") 
#>  -> stableMode :  method='density',  length of x =1142, 'bandw' has been set to 47

Only if the hypothesis that NA-neighbours are typically low abundance values gets confirmed by visual inspection of the histograms may one safely proceed to replacing the NAs by low random values.

If one uses a unique (very) low value for all NA-replacements, this will quickly pose a problem at the level of t-tests when looking for proteins changing abundance between two or more groups of samples. Therefore it is common practice to draw random values from a Normal distribution representing this lower end of abundance values. Nevertheless, the choice of the parameters of this Normal distribution is very delicate.

This package proposes several related strategies/options for NA-imputation. First, the classical imputation of NA-values using Normally distributed random data is presented (see also the illustrative sketch below). The mean value for the Normal data can be taken from the median or mode of the NA-neighbour values, since (in the case of technical replicates) NA-neighbours tell us what these values might have been, and thus we model a distribution around them. Later in this vignette, a more elaborate version based on repeated imputations to obtain more robust results will be presented.
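
To illustrate the principle only (the dedicated function presented just below handles the choice of these parameters much more carefully), a naive replacement of NAs by low random values might look like this sketch; the quantile and standard deviation used here are arbitrary choices for illustration :

## illustrative sketch of NA-replacement, not the package's exact procedure
qu <- dataMQ$quant
naPos <- which(is.na(qu))                              # positions of the NAs
set.seed(2021)                                         # reproducibility of this sketch
loMean <- quantile(qu, 0.1, na.rm=TRUE)                # a low quantile as mean (arbitrary here)
qu[naPos] <- rnorm(length(naPos), mean=loMean, sd=0.5)   # draw low random replacement values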

The function matrixNAneighbourImpute() proposed in this package offers automatic selection of these parameters, which have been tested in a number of different projects. However, this choice should be checked by critically inspecting the histograms of ‘NA-neighbours’ (ie successful quantitation in other replicate samples of the same protein) and the final resulting distribution. Initially all NA-neighbours are extracted. It is also worth mentioning that in the majority of data-sets encountered, such NA-neighbours form skewed distributions.

NA-neighbours from instances with more than one NA-value per group may be considered even more representative, but of course fewer successfully quantified values remain. Thus a primary choice is made: if the selection of (min) 2 NA-values per group yields more than 300 values, this distribution will be used as base to model the distribution for drawing random values. In this case, by default the 0.18 quantile of the 2-NA-neighbour distribution will be used as mean for the new Normal distribution used for NA-replacements. If fewer than 300 such 2-NA-neighbours are available, (by default) the 0.1 quantile of all NA-neighbour values will be used. Of course, the user also has the possibility to use custom choices for these parameters.

The final replacement is done on all NA values. This also includes proteins which are all NA in a given condition, as well as instances of mixed successful quantitation and NA values.

## MaxQuant simple NA-imputation (single round)
dataMQimp <- matrixNAneighbourImpute(dataMQ$quant, gr=grp9, tit="Example from MaxQuant") 
#> -> matrixNAneighbourImpute -> stableMode :  method='density',  length of x =1142, 'bandw' has been set to 47
#>  -> matrixNAneighbourImpute :  n.woNA= 26963 , n.NA = 2845
#>     impute based on 'mode2' using mean= 21.42 and sd= 0.5
#> 

However, imputing using Normally distributed random data also brings the risk of occasional extreme values. In the extreme case it may happen that a given protein is all NA in one group, and by chance the random values turn out to be rather high. Then, the final group mean of imputed values may even be higher than the mean of another group with successful quantitations. Of course, in this case it would be a poor interpretation to consider the protein in question upregulated in a sample where all values for this protein were NA. To circumvent this problem there are two options : 1) one may use special filtering schemes to exclude such constellations from final results, or 2) one could repeat the replacement of NA-values numerous times.

Filtering can be performed using presenceFilt() (package wrMisc); a brief sketch is shown below.
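
A minimal sketch of such filtering (the argument names are assumptions, please check ?presenceFilt for the actual interface) :

## flag lines with sufficient non-NA values per group of replicates (assumed call)
filtOK <- wrMisc::presenceFilt(dataMQ$quant, grp=grp9)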

The function testRobustToNAimputation() allows such repeated replacement of NA-values. For details, see also the following section.

Statistical Testing

The t-test remains the main statistical test used, as in many other cases of omics, too. Statistical testing in the context of proteomics data poses challenges similar to transcriptomics: many times the number of replicate-samples is fairly low and the inter-measurement variability quite high. In some unfortunate cases proteins with rather constant quantities may appear as false positives when searching for proteins whose abundance changes between two groups of samples: if the apparent variability is by chance too low, the respective standard-deviations will be low and a plain t-test may give overly enthusiastic p-values.
Next to stringent filtering (previous section of this vignette), the shrinkage applied when estimating the intra-group/replicate variance by the Bioconductor package limma turns out very helpful, see Ritchie et al 2015. Here, the limma function eBayes() has been used and adapted to proteomics.

The function testRobustToNAimputation() allows running multiple cycles of NA-imputation and statistical testing with the aim of providing stable imputation and testing results. It performs NA-imputation and statistical testing (after repeated imputation) between all groups of samples at the same time (as it would be inefficient to separate these two tasks). The tests underneath apply shrinkage from the empirical Bayes procedure of the Bioconductor package limma. In addition, various formats of multiple-test correction can be directly added to the results: Benjamini-Hochberg FDR, local false discovery rate (lfdr, using the package fdrtool, see Strimmer 2008, doi: 10.1093/bioinformatics/btn209), or modified testing by ROTS, etc …

The fact that a single round of NA-imputation may provoke false positives as well as false negatives made it necessary to combine this (iterative) process of NA-imputation and subsequent testing in one single function.

## Impute NA-values repeatedly and run statistical testing after each round of imputations
testMQ <- testRobustToNAimputation(dataMQ, gr=grp9) 
#> -> testRobustToNAimputation -> matrixNAneighbourImpute :  invalid entry for argument 'seedNo', it may be single integer or NULL, setting to NULL
#> -> testRobustToNAimputation -> matrixNAneighbourImpute -> stableMode :  method='density',  length of x =1142, 'bandw' has been set to 47
#> -> testRobustToNAimputation -> matrixNAneighbourImpute :  n.woNA= 26963 , n.NA = 2845
#>     impute based on 'mode2' using mean= 21.42 and sd= 0.5
#> 
#> -> testRobustToNAimputation -> combineMultFilterNAimput :     at presenceFilt:   1050 1049 1047 1030 1051 1037 1050 1022 1040 1046 1034 1050 1039 1027 1047 1061 1022 1060 1061 1032 1066 1056 1017 1045 1048 1019 1051 1028 1040 1051 1032 1027 1043 1027 1067 1051   out of  1104 
#> -> testRobustToNAimputation -> combineMultFilterNAimput :     at abundanceFilt:  1044 1041 1042 1021 1037 1030 1043 1008 1033 1036 1029 1045 1034 1022 1043 1054 1016 1053 1051 1025 1060 1048 1001 1038 1032 1001 1047 1018 1037 1039 1030 1018 1034 1024 1063 1035
#> -> testRobustToNAimputation -> combineMultFilterNAimput :    at NA> mean:   948, 977, 938, 972, 954, 974, 951, 966, 942, 960, 973, 938, 964, 960, 938, 975, 944, 968, 977, 947, 962, 953, 956, 945, 960, 966, 941, 944, 943, 967, 934, 943, 960, 927, 942 and 955

## the data after repeated NA-imputation
head(testMQ$datImp[,1:8])
#>        12500amol_1 12500amol_2 12500amol_3 125amol_1 125amol_2 125amol_3
#> A5Z2X5    23.72700    23.71202    23.65306  23.76069  24.03258  23.70469
#> P00761    28.83921    28.78712    28.86386  28.70094  28.70978  28.76617
#> P01966    21.15232    21.01268    20.95126  21.53511  21.46770  21.54617
#> P02768    24.65579    24.20806    24.56327  15.69777  16.19941  14.92077
#> P04264    21.48426    21.55433    21.46862  21.39359  21.44080  21.25315
#> P13645    21.48474    21.50431    21.54231  21.54880  21.43453  21.58591
#>        25000amol_1 25000amol_2
#> A5Z2X5    23.78891    23.73787
#> P00761    28.75811    28.67850
#> P01966    21.65894    22.21233
#> P02768    25.66044    25.59727
#> P04264    21.39693    21.48447
#> P13645    21.47122    21.53478

PCA

Briefly, principal components analysis (PCA) seeks to decompose the data along the axes defined by all samples. Then, the axis-combinations with the highest degree of correlation are identified. In principle one could also run PCA along the rows, ie the proteins, but their number is typically so high that the resulting plots get too crowded.

In the context of high-throughput experiments, like proteomics, PCA helps reveal how the different samples are related (ie how similar they are). This covers of course the real differences between biological conditions, but also additional bias introduced as (technical) artifacts. Thus, such plots serve both for quality control (in particular to identify outlier-samples, eg due to degraded material) and for the biological interpretation.

# limit to UPS1 
plotPCAw(testMQ$datImp, sampleGrp=grp9, tit="PCA on MaxQuant (NAs imputed)", rowTyName="proteins", useSymb2=0)

Please note, the vignette dedicated to spike-in experiments (“UPS-1 spike-in Experiments”) presents a slightly different way of making PCA-plots for this specific type of experiment/data-set.

Volcano-Plot

A Volcano-plot allows comparing the simple fold-change (FC) against the outcome of a statistical test. Frequently we can observe that some proteins show a very small FC but enthusiastic p-values and, subsequently, enthusiastic FDR-values. However, proteins with such small FC are generally not considered reliable results; therefore it is common practice to add an additional FC-threshold, typically a 1.5- or 2-fold-change (as illustrated below).
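
Since the quantitation data are on log2 scale, these fold-change thresholds translate into absolute log2-differences as follows :

## fold-change thresholds expressed on the log2 scale used for testing
log2(c(FC1.5=1.5, FC2=2))     # ie about 0.585 and exactly 1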

The number of proteins retained by pair-wise comparison :

## by default the first pairwise comparison is taken
## using the argument 'namesNBest' we can add names from the annotation
VolcanoPlotW2(testMQ, namesNBest="passThr")
#>  -> VolcanoPlotW2 : Using element 'BH' as FDR-values for plot
#>  -> VolcanoPlotW2 : Successfully extracted  1104 Mvalues and  1104 pValues plus anotation
#>  -> VolcanoPlotW2 :  filtered (based on 'filtFin') from 1104 to  948 lines

Additional note: Volcano-plots may also help identifying bias in the data, in particular regarding the question whether normalization gave satisfactory results. Based on the hypothesis of no global change used for normalization, one would normally expect about the same number of down-regulated as up-regulated proteins.

In fact, this experiment is somewhat unusual since one set of samples received a strong increase in abundance for 48 UPS1 proteins while the other proteins remained constant. Thus, on the global scale there may be a (small) imbalance of abundances and the global median will reflect this, which can create some bias. So, in this special case it might be better to perform normalization based only on the yeast proteins (which are assumed to be constant), as has been done in the vignette ‘UPS-1 spike-in Experiments’, which is entirely dedicated to UPS1 spike-in experiments.

Reporting Results

Tables with results can be created either directly using VolcanoPlotW() or, as shown below, using extractTestingResults().

For this example the moderated t-test expressed as Benjamini-Hochberg FDR gave 99 proteins with FDR < 0.05 (however, many of them with low FC) for the comparison 12500amol-125amol. So one should add an additional filter to remove results with too low FC values.

res1 <- extractTestingResults(testMQ, compNo=1, thrsh=0.05, FCthrs=2)

After FC-filtering for 2-fold (ie change to double or half) 9 proteins remain.

knitr::kable(res1[,-1], caption="5%-FDR (BH) Significant results for 1st pairwise set", align="c")
5%-FDR (BH) Significant results for 1st pairwise set

Accession  EntryName        GeneName      BH       logFC    av.12500amol  av.125amol
P02768     ALBU_HUMAN_UPS   ALB        0.0000006  -7.23269     22.8387      15.6060
P12081     SYHC_HUMAN_UPS   HARS       0.0000408  -1.65047     21.7754      20.1249
P40518     ARPC5_YEAST      ARC15      0.0001274   1.38588     22.4502      23.8360
Q06830     PRDX1_HUMAN_UPS  PRDX1      0.0001781  -2.60795     21.9227      19.3147
P02144     MYG_HUMAN_UPS    MB         0.0027879  -2.70638     22.2459      19.5395
P00441     SODC_HUMAN_UPS   SOD1       0.0056780  -1.82887     22.5683      20.7394
P16083     NQO2_HUMAN_UPS   NQO2       0.0123011  -4.94696     21.5930      16.6461
Q03677     MTND_YEAST       ADI1       0.0146411   1.42144     21.2750      22.6964
P06732     KCRM_HUMAN_UPS   CKM        0.0373721  -4.66813     21.5811      16.9129

Please note that the column-name ‘BH’ refers to Benjamini-Hochberg FDR (several other options for multiple-testing correction exist, too). We can see that many UPS1 proteins are, as expected, among the best-ranking differential changes. However, not all UPS1 proteins show up in the results as expected, and furthermore a number of yeast proteins (expected to remain constant !) were reported as differential, too.

The function extractTestingResults() also allows writing the data shown above directly to a csv-file.
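
If one prefers handling the export manually, the extracted table can of course also be written with base R (the file name below is just an example) :

## write the filtered results to a csv-file (example path and name)
write.csv(res1, file=file.path(tempdir(), "signifMQ_12500vs125amol.csv"))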

Further Steps

In case of standard projects one typically would like to find out more about the biological context of the proteins retained from the statistical analysis, their function and their interactors. Such a list of significant proteins from a given project could later on be tested for enrichment of GO-functions or for their inter-connectivity in PPI networks like STRING. There are multiple tools available on Bioconductor and CRAN, as well as outside of R, to perform such analysis tasks.

In the case of UPS1 spike-in experiments the subsequent analysis is different. Suggestions for in-depth analysis of UPS1 spike-in experiments are given in the vignette ‘UPS-1 spike-in Experiments’ of this package.



Protein Annotation

In most ‘omics’ activities getting additional annotation may be a bit tricky. In proteomics most mass-spectrometry software will use the information provided in the Fasta-file as annotation (typically as provided by UniProt). But this lacks, for example, chromosomal location information. There are many repositories with genome-, gene- and protein-annotation, and most of them are linked, but sometimes the links get broken when database updates are not done everywhere or are not followed by new re-matching. The Fasta-files used initially for mass-spectrometry peak-identification may not be fully up to date (sometimes gene- or protein-IDs change or may even disappear) and thus will contribute a certain percentage of entries that are hard to link.

Globally two families of strategies for adding annotation exist :

  1. Strategies using online resources for getting the most up-to-date information/annotation. Despite the advantage of most up-to-date information there may be some downsides: this may require quite some time to run all queries over internet connections, and this strategy is vulnerable to broken links (eg linked to the delay of updates between different types of databases that may need to be combined). Furthermore, the results typically change a bit when the queries get repeated. When combining multiple interconnected resources it may be very difficult to document the precise version of all individual resources used.

  2. Strategies based on using (local copies of) defined versions of databases. Frequently, these databases can be downloaded/installed locally and thus allow faster queries and guarantee repeatability and comparability to other tools or studies. Despite the disadvantage that some information might be slightly out of date, the ability to completely control the query process and the repeatability are reasonable arguments.

In the context of adding chromosomal annotation to a list of proteins the following concept is developed here: annotation-tables from UCSC are available for a good number of species and can be downloaded for convenient off-line search. However, in the context of less common species we realized that the UniProt tables from UCSC often had a low yield in final matching. For this reason we propose a slightly more complicated route which finally provided a much higher success-rate in finding chromosomal locations for a list of UniProt IDs. First one needs to download from UCSC the table corresponding to the species in question (fields ‘clade’, ‘genome’, ‘assembly’). For ‘group’ choose ‘Genes and Gene Predictions’, for ‘track’ choose ‘Ensembl Genes’, and as table choose ‘ensGene’. In addition, it is possible to select either the entire genome-annotation or user-specified regions. In terms of ‘output-format’ one may choose ‘GTF’ (slightly more condensed, no headers) or ‘all fields from selected table’.

The strategy for adding genomic location data presented here :

Locate & download organism annotation from UCSC, read into R (readUCSCtable() ) -> from R export (non-redundant) ‘enst’-IDs (still readUCSCtable() ), get corresponding UniProt-IDs at UniProt site, save as file and import result into R (readUniProtExport() ) -> (in R) combine with initial UCSC table (readUniProtExport() ) .

The function readUCSCtable() is able to read such files downloaded from UCSC; compressed .gz files can be read, too (as in the example below). To keep this example small, we’ll just look at chromosome 11 of the human genome.

path1 <- system.file("extdata", package="wrProteo")
gtfFi <- file.path(path1, "UCSC_hg38_chr11extr.gtf.gz")
UcscAnnot1 <- readUCSCtable(gtfFi)
#>  -> readUCSCtable :  File 'UCSC_hg38_chr11extr.gtf.gz' is gtf : TRUE
#>  -> readUCSCtable :  simplifed from 2000 to 223 non-redundant gene_id

# The Ensembl transcript identifiers and their chromosomal locations :
head(UcscAnnot1)
#>      gene_id              chr     start    end      strand frame
#> [1,] "ENST00000270115.8"  "chr11" "537527" "554912" "+"    "."  
#> [2,] "ENST00000308020.6"  "chr11" "450279" "491399" "+"    "."  
#> [3,] "ENST00000311189.8"  "chr11" "532242" "535576" "-"    "."  
#> [4,] "ENST00000312165.5"  "chr11" "278570" "285304" "+"    "."  
#> [5,] "ENST00000325113.8"  "chr11" "196738" "200258" "+"    "."  
#> [6,] "ENST00000325147.13" "chr11" "202924" "207382" "-"    "."

However, this annotation does not provide protein IDs. In order to obtain the corresponding protein IDs an additional step is required: here we will use the batch search/conversion tool from UniProt. To do so, we can export directly from readUCSCtable() a small text-file which can be fed into the UniProt batch-search tool.

# Here we'll redo reading the UCSC table, plus immediately write the file for UniProt conversion
#  (in this vignette we write to tempdir() to keep things tidy)
expFi <- file.path(tempdir(),"deUcscForUniProt2.txt")
UcscAnnot1 <- readUCSCtable(gtfFi, exportFileNa=expFi)
#>  -> readUCSCtable :  File 'UCSC_hg38_chr11extr.gtf.gz' is gtf : TRUE
#>  -> readUCSCtable :  simplifed from 2000 to 223 non-redundant gene_id
#>  -> readUCSCtable :  Write to file : ENST00000270115, ENST00000308020, ENST00000311189, ENST00000312165 ...
#>  -> readUCSCtable :  Exporting file  'C:/Users/wraff/AppData/Local/Temp/RtmpwvQ1IW/deUcscForUniProt2.txt'  for conversion on https://www.uniprot.org/uploadlists

Now everything is ready to go to UniProt for retrieving the corresponding UniProt-IDs. Since we exported Ensembl transcript IDs (ENSTxxx), select converting from ‘Ensembl Transcript’ to ‘UniProtKB’. Then, when downloading the conversion results, choose the tab-separated file format (compression is recommended); this may take several seconds (depending on the size).

It is suggested to rename the downloaded file so one can easily understand its content. Note that the function readUniProtExport() can also read .gz compressed files. To continue this vignette we’ll use a result which has been downloaded from UniProt and renamed to ‘deUniProt_hg38chr11extr.tab’. One may optionally define a specific genomic region of interest using the argument ‘targRegion’; here the entire chromosome 11 was chosen.

deUniProtFi <- file.path(path1, "deUniProt_hg38chr11extr.tab")
deUniPr1 <- readUniProtExport(UniP=deUniProtFi, deUcsc=UcscAnnot1, targRegion="chr11:1-135,086,622")
#>  -> readUniProtExport :  low yield matching 'enst00000410108', 'enst00000342878' and 'enst00000325113,enst00000525282' and 'ENST00000270115', 'ENST00000308020' and 'ENST00000311189' convert all to lower case and remove version numbers ('xxx.2') for better matching
#>  -> readUniProtExport :  intergrating genomic information for 88 entries (14 not found)
#>  -> readUniProtExport :  88 IDs in output
str(deUniPr1)
#> 'data.frame':    88 obs. of  14 variables:
#>  $ EnsTraID     : chr  "enst00000410108" "enst00000342878" "enst00000325113,enst00000525282" "enst00000342593" ...
#>  $ UniProtID    : chr  "B8ZZS0" "Q8TD33" "Q96PU9" "F8W6Z3" ...
#>  $ Entry.name   : chr  "B8ZZS0_HUMAN" "SG1C1_HUMAN" "ODF3A_HUMAN" "F8W6Z3_HUMAN" ...
#>  $ Status       : chr  "unreviewed" "reviewed" "reviewed" "unreviewed" ...
#>  $ Protein.names: chr  "BET1-like protein" "Secretoglobin family 1C member 1 (Secretoglobin RYD5)" "Outer dense fiber protein 3 (Outer dense fiber of sperm tails protein 3) (Sperm tail protein SHIPPO 1) (Transcr"| __truncated__ "Outer dense fiber protein 3" ...
#>  $ Gene.names   : chr  "BET1L" "SCGB1C1" "ODF3 SHIPPO1 TISP50" "ODF3" ...
#>  $ Length       : int  152 95 254 127 102 111 92 58 88 148 ...
#>  $ chr          : chr  "chr11" "chr11" NA "chr11" ...
#>  $ start        : num  167784 193078 NA 197303 202924 ...
#>  $ end          : num  207383 194575 NA 200033 207382 ...
#>  $ strand       : chr  "-" "+" NA "+" ...
#>  $ frame        : chr  "." "." NA "." ...
#>  $ avPos        : num  187584 193826 NA 198668 205153 ...
#>  $ inTarg       : logi  FALSE FALSE NA FALSE FALSE FALSE ...

The resulting data.frame (ie the column ‘UniProtID’) may be used to complement the protein annotation after importing mass-spectrometry peak- and protein-identification results (see the sketch below). Obviously, using recent Fasta-files from UniProt for protein-identification will typically give better matching at the end.
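
A minimal sketch of such complementing (assuming, as indicated by the import messages above, that the rownames of dataMQ$annot correspond to UniProt accessions) :

## match UniProt accessions from the MaxQuant annotation to the genomic annotation
ix <- match(rownames(dataMQ$annot), deUniPr1$UniProtID)
annotWithLoc <- cbind(dataMQ$annot, deUniPr1[ix, c("chr","start","end","strand")])   # NA where no match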

You may note that Ensembl transcript IDs are sometimes written as ‘enst00000410108’ whereas at other places they may be written as ‘ENST00000410108.5’. The function readUniProtExport() switches to a more flexible search mode, stripping off version-numbers and reading everything in lower case, if initial direct matching reveals fewer than 4 hits (illustrated below).
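
The principle of this relaxed matching can be illustrated with base R :

## strip version-numbers and lower the case for more flexible matching
ids <- c("ENST00000410108.5", "enst00000410108")
tolower(sub("\\.\\d+$", "", ids))        # both become 'enst00000410108'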

Finally, it should be added, that of course several other ways of retrieving annotation exist, in particular, using the annotation-packages provided by Bioconductor.

Acknowledgements

The author would like to acknowledge the support by the IGBMC (CNRS UMR 7104, Inserm U 1258), CNRS, Université de Strasbourg and Inserm. All colleagues from the proteomics platform at the IGBMC work with great commitment to provide high-quality mass-spectrometry data (including some of the data used here). The author wishes to thank the CRAN staff for all their help with new entries and their efforts in maintaining this repository of R-packages. Furthermore, many fruitful discussions with colleagues on national and international level have helped to improve the tools presented here.

Session-Info

For completeness :

#> R version 4.0.4 (2021-02-15)
#> Platform: x86_64-w64-mingw32/x64 (64-bit)
#> Running under: Windows 10 x64 (build 19042)
#> 
#> Matrix products: default
#> 
#> locale:
#> [1] LC_COLLATE=C                   LC_CTYPE=French_France.1252   
#> [3] LC_MONETARY=French_France.1252 LC_NUMERIC=C                  
#> [5] LC_TIME=French_France.1252    
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] rmarkdown_2.7  knitr_1.31     wrGraph_1.2.2  wrProteo_1.4.0 wrMisc_1.5.3  
#> 
#> loaded via a namespace (and not attached):
#>  [1] magrittr_2.0.1     fdrtool_1.2.16     R6_2.5.0           rlang_0.4.10      
#>  [5] stringr_1.4.0      highr_0.8          tcltk_4.0.4        tools_4.0.4       
#>  [9] xfun_0.21          R.oo_1.24.0        jquerylib_0.1.3    htmltools_0.5.1.1 
#> [13] yaml_2.2.1         digest_0.6.27      RColorBrewer_1.1-2 sass_0.3.1        
#> [17] R.utils_2.10.1     sm_2.2-5.6         evaluate_0.14      limma_3.46.0      
#> [21] stringi_1.5.3      compiler_4.0.4     bslib_0.2.4        R.methodsS3_1.8.1 
#> [25] jsonlite_1.7.2