OmopOnSpark

OmopOnSpark provides a Spark-specific implementation of an OMOP CDM reference, as defined by the omopgenerics R package.

Installation

You can install the development version of OmopOnSpark from GitHub with:

# install.packages("devtools")
devtools::install_github("oxford-pharmacoepi/OmopOnSpark")

Creating a cdm reference using sparklyr

Let’s first load the R libraries.

library(dplyr)
library(sparklyr)
library(OmopOnSpark)

To work with OmopOnSpark, we first need to create a connection to our data using the sparklyr package. In the example below, we have a schema called “omop” that contains all the OMOP CDM tables, and another schema where we can write results during the course of a study. We also set a write prefix so that all the tables we write start with it (which makes it easy to clean up afterwards and to avoid name conflicts with other users).

con <- sparklyr::spark_connect(.....)
cdm <- cdmFromSpark(con,
  cdmSchema = "omop",
  writeSchema = "results",
  writePrefix = "study_1_"
)
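
For illustration, here is a minimal sketch of connecting to a Spark instance running locally. The master value is an assumption for this example; a real deployment would use your cluster’s master URL and configuration.

# A sketch only: connect to a local Spark instance
# ("local" is an assumption; replace with your cluster's master URL)
con <- sparklyr::spark_connect(master = "local")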

For this introduction we’ll use a mock cdm containing a small synthetic dataset in a local Spark database.

cdm <- mockSparkCdm(path = file.path(tempdir(), "temp_spark"))

Cross-platform support

With our cdm reference created, we now have a single object in R that represents our OMOP CDM data.

cdm

This object contains references to each of our tables:

cdm$person |>
  dplyr::glimpse()

cdm$observation_period |>
  dplyr::glimpse()

With this, we can use familiar dplyr code. For example, we can quickly get a count of our person table.

cdm$person |> 
  tally()

Behind the scenes, the dbplyr R package is translating this to SQL.

cdm$person |> 
  tally() |> 
  show_query()
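
Other dplyr verbs are translated in the same way. As a sketch, we could count people by year of birth (the column name follows the OMOP CDM person table):

cdm$person |>
  count(year_of_birth) |>
  arrange(desc(n))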

We can also make use of various existing packages that work with a cdm reference using this approach. For example, we can summarise a snapshot of our cdm using the OmopSketch package.

library(OmopSketch)
library(flextable)

snap <- summariseOmopSnapshot(cdm)
tableOmopSnapshot(snap, type = "tibble")

Native Spark support

As well as making use of packages that provide cross-platform functionality with the cdm reference, such as OmopSketch, we can also make use of native Spark queries because OmopOnSpark is built on top of the sparklyr package. For example, we can compute summary statistics on one of our cdm tables using Spark functions.

cdm$person |>
  sdf_describe(cols = c(
    "gender_concept_id",
    "year_of_birth",
    "month_of_birth",
    "day_of_birth"
  ))
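
Other sparklyr helpers also work directly on these table references. For instance, sdf_nrow() computes a row count natively in Spark, equivalent to the tally() call above:

sdf_nrow(cdm$person)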

With this we hopefully achieve the best of both worlds. On the one hand, we can participate in network studies where code has been written to work across database platforms. On the other, we can go beyond this approach, writing bespoke code that makes use of Spark-specific functionality.

Disconnecting from your Spark connection

We can disconnect from our Spark connection like so:

cdmDisconnect(cdm)
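
If you created the sparklyr connection yourself, you can also close it directly (assuming con is the connection object created earlier):

# Close the underlying sparklyr connection directly
sparklyr::spark_disconnect(con)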
