variantspark v0.1.1


A 'Sparklyr' Extension for 'VariantSpark'

This is a 'sparklyr' extension integrating 'VariantSpark' and R. 'VariantSpark' is a framework based on 'scala' and 'spark' for analyzing genomic datasets; see <https://bioinformatics.csiro.au/>. It has been tested on datasets with 3000 samples, each containing 80 million features, in both unsupervised clustering approaches and supervised applications such as classification and regression. Genomic datasets are usually stored in VCF, a text file format used in bioinformatics for storing gene sequence variations. 'VariantSpark' is therefore a useful tool for genome research: it can read VCF files, run analyses, and return the output as a 'spark' data frame.

Readme

A sparklyr extension for VariantSpark

VariantSpark is a framework based on Scala and Spark for analyzing genomic datasets. It is developed by the CSIRO Bioinformatics team in Australia. VariantSpark has been tested on datasets with 3000 samples, each containing 80 million features, in both unsupervised clustering approaches and supervised applications such as classification and regression.

Genomic datasets are usually stored in the Variant Call Format (VCF), a text file format used in bioinformatics for storing gene sequence variations. VariantSpark is a great fit here because it can read VCF files, run analyses, and return the output as a Spark data frame.
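For context, a VCF file consists of meta-information header lines (starting with ##), a column header line, and one tab-separated record per variant. The fragment below is a hand-written illustration of the general layout; the coordinates, IDs, and genotypes are made up for illustration and are not taken from the example data shipped with this package:

##fileformat=VCFv4.2
##source=illustrativeExample
#CHROM  POS     ID           REF  ALT  QUAL  FILTER  INFO  FORMAT  sampleA  sampleB
1       10177   rs367896724  A    AC   100   PASS    .     GT      0|1      1|1
1       10352   rs555500075  T    TA   100   PASS    .     GT      0|0      0|1

Each sample column holds the genotype for that sample at that variant, which is what VariantSpark turns into the feature matrix it analyzes.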

This repo is an R package that integrates R and VariantSpark using sparklyr. This way, you can analyze huge genomic datasets without leaving the R environment you already know.

Installation

To install or upgrade to the latest version of variantspark, run the following commands and restart your R session:

install.packages("devtools")
devtools::install_github("r-spark/variantspark")

Connect to Spark and VariantSpark

To use the variantspark R package, you need to create a VariantSpark connection. To do this, pass a Spark connection as an argument:

library(sparklyr)
library(variantspark)

sc <- spark_connect(master = "local")
vsc <- vs_connect(sc)

Load datasets

VariantSpark can load VCF files as well as other formats, such as CSV.

hipster_vcf <- vs_read_vcf(vsc, "inst/extdata/hipster.vcf.bz2")
hipster_labels <- vs_read_csv(vsc, "inst/extdata/hipster_labels.txt")
labels <- vs_read_labels(vsc, "inst/extdata/hipster_labels.txt") # read just the label column

Importance analysis

Importance analysis is one of VariantSpark's main applications. Briefly, VariantSpark uses a random forest to assign an "importance" score to each tested variant, reflecting its association with the phenotype of interest: a variant with a higher importance score is more strongly associated with that phenotype. This is how you run it in R:

# calculate the "Importance"
importance <- vs_importance_analysis(vsc, hipster_vcf, labels, n_trees = 100)

# transform the output into a Spark tibble
importance_tbl <- importance_tbl(importance)

Plot the results

You can use dplyr and ggplot2 to transform and plot the output:

library(dplyr)
library(ggplot2)

# collect the 20 most important variants into memory
importance_df <- importance_tbl %>% 
  arrange(-importance) %>% 
  head(20) %>% 
  collect()

# importance barplot, with variants ordered by importance score
ggplot(importance_df) +
  aes(x = variable, y = importance) +
  geom_bar(stat = "identity") +
  scale_x_discrete(limits = importance_df$variable[order(importance_df$importance)]) +
  coord_flip()

Disconnect

Don't forget to disconnect your session when you finish your work.

spark_disconnect(sc)

Functions in variantspark

Name                     Description
vs_read_vcf              Reading a VCF file
importance_tbl           Extract the importance data frame
sample_names             Display sample names
vs_connect               Creating a variantspark connection
vs_importance_analysis   Importance Analysis
vs_read_labels           Reading labels
vs_read_csv              Reading a CSV file


Details

Type Package
License Apache License 2.0 | file LICENSE
LazyData true
RoxygenNote 6.1.1
Encoding UTF-8
NeedsCompilation no
Packaged 2019-06-11 23:30:12 UTC; dmmad
Repository CRAN
Date/Publication 2019-06-13 16:20:03 UTC
