sparktf v0.1.0


Interface for 'TensorFlow' 'TFRecord' Files with 'Apache Spark'

A 'sparklyr' extension that enables reading and writing 'TensorFlow' TFRecord files via 'Apache Spark'.


sparktf


Overview

sparktf is a sparklyr extension that allows reading and writing Spark DataFrames as TFRecord files, the recommended format for persisting data to be used in training with TensorFlow.
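
For a quick taste, writing any Spark DataFrame out as TFRecord is a single call. A minimal sketch, assuming a local connection (the full walkthrough below covers the details):

library(sparklyr)
library(sparktf)

sc <- spark_connect(master = "local")

sdf_copy_to(sc, mtcars) %>%
  spark_write_tfrecord(path = file.path(tempdir(), "mtcars_tfrecord"))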

Installation

You can install sparktf from CRAN with:

install.packages("sparktf")

You can install the development version of sparktf from GitHub with:

devtools::install_github("rstudio/sparktf")

Example

We first attach the required packages and establish a Spark connection.

library(sparktf)
library(sparklyr)
library(keras)
use_implementation("tensorflow")  # use the tf.keras implementation of Keras
library(tensorflow)
tfe_enable_eager_execution()      # enable eager execution (TensorFlow 1.x)
library(tfdatasets)

sc <- spark_connect(master = "local")

We copy a sample dataset to Spark, index the Species column into a numeric label column, then write the result to disk via spark_write_tfrecord().

data_path <- file.path(tempdir(), "iris")
iris_tbl <- sdf_copy_to(sc, iris)

iris_tbl %>%
  ft_string_indexer_model(
    "Species", "label",
    labels = c("setosa", "versicolor", "virginica")
  ) %>%
  spark_write_tfrecord(
    path = data_path,
    write_locality = "local"
  )
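
To verify the round trip, the files can be read back into Spark with spark_read_tfrecord(). A minimal sketch, assuming the usual sparklyr reader arguments (sc, name, path):

iris_tf_tbl <- spark_read_tfrecord(sc, name = "iris_tf", path = data_path)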

We now read the saved TFRecord files and parse the contents to create a dataset object. For details on the dataset API, refer to the tfdatasets package website.

dataset <- tfrecord_dataset(list.files(data_path, full.names = TRUE)) %>%
  dataset_map(function(example_proto) {
    features <- list(
      label = tf$FixedLenFeature(shape(), tf$float32),
      Sepal_Length = tf$FixedLenFeature(shape(), tf$float32),
      Sepal_Width = tf$FixedLenFeature(shape(), tf$float32),
      Petal_Length = tf$FixedLenFeature(shape(), tf$float32),
      Petal_Width = tf$FixedLenFeature(shape(), tf$float32)
    )

    features <- tf$parse_single_example(example_proto, features)
    x <- tf$stack(list(
      features$Sepal_Length, features$Sepal_Width,
      features$Petal_Length, features$Petal_Width
    ))
    y <- tf$one_hot(tf$cast(features$label, tf$int32), 3L)
    list(x, y)
  }) %>%
  dataset_shuffle(150) %>%
  dataset_batch(16)
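
Because eager execution is enabled, we can sanity-check the pipeline by pulling a single batch out of the dataset. A sketch using reticulate's generic iterator helpers (an assumption that the dataset is iterable in eager mode; iter_next() returns NULL once the dataset is exhausted):

it <- reticulate::as_iterator(dataset)
batch <- reticulate::iter_next(it)
str(batch)  # a list of two tensors: features shaped (16, 4), one-hot labels shaped (16, 3)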

Now we can define a Keras model using the keras package and fit it directly on the dataset object defined above.

model <- keras_model_sequential() %>%
  layer_dense(32, activation = "relu", input_shape = 4) %>%
  layer_dense(3, activation = "softmax")

model %>%
  compile(loss = "categorical_crossentropy", optimizer = tf$train$AdamOptimizer())

history <- model %>%
  fit(dataset, epochs = 100, verbose = 0)
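
The returned history object records the loss per epoch, so convergence can be inspected with the usual keras helpers:

plot(history)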

Finally, we can use the trained model to make some predictions.

new_data <- tf$constant(c(4.9, 3.2, 1.4, 0.2), shape = c(1, 4))
model(new_data)
#> tf.Tensor([[0.69612664 0.13773003 0.1661433 ]], shape=(1, 3), dtype=float32)
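
To map the softmax probabilities back to a species, index into the label ordering we passed to ft_string_indexer_model() above (a small sketch; $numpy() pulls the values out of the eager tensor):

labels <- c("setosa", "versicolor", "virginica")
probs <- as.vector(model(new_data)$numpy())
labels[which.max(probs)]
#> [1] "setosa"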

Functions in sparktf

spark_read_tfrecord: Read a TFRecord file
spark_write_tfrecord: Write a Spark DataFrame to a TFRecord file


Details

Type: Package
License: Apache License (>= 2.0)
Encoding: UTF-8
SystemRequirements: TensorFlow (https://www.tensorflow.org/)
LazyData: true
RoxygenNote: 6.1.0
NeedsCompilation: no
Packaged: 2019-02-26 22:12:47 UTC; kevinykuo
Repository: CRAN
Date/Publication: 2019-03-05 14:30:03 UTC
Depends: R (>= 3.1.2)
Imports: sparklyr (>= 1.0)
Suggests: dplyr, testthat
