Assignment by reference
Fast add, remove and modify subsets of columns, by reference.
Usage:

    LHS := RHS                 # in j only, i.e. DT[i, LHS := RHS, by]
    set(x, i=NULL, j, value)
Arguments:

    LHS    A single column name. Or, when with=FALSE, a vector of column
           names or numeric positions (or a variable that evaluates as
           such). If the column doesn't exist, it is added, by reference.

    RHS    A vector of replacement values. It is recycled in the usual way
           to fill the number of rows satisfying i, if any. Or, when
           with=FALSE, a list of replacement vectors which are applied (the
           first replacement vector to the first column name or position,
           and so on).

    i      Optional. In set(), integer row numbers to be assigned value.
           NULL represents all rows more efficiently than creating a
           vector such as 1:nrow(x).

    j      In set(), integer column number to be assigned value.

    value  Value to assign by reference to x[i,j].
Details:

:= is defined for use in j only. This syntax updates the column(s) by reference. It makes no copies of any part of memory at all. Typical usages are shown in the Examples below.
The following all result in a friendly error (by design):

    x := 1L                   # friendly error
    DT[i, colname] := value   # friendly error
    DT[i]$colname := value    # friendly error
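By contrast, assignment inside j works with subsetting and grouping. A minimal sketch (the column names grp, b and m are illustrative, not from the examples below):

```r
library(data.table)

DT = data.table(grp = c("a", "a", "b"), b = 1:3)
DT[, b := b * 10L]             # update a whole column by reference
DT[grp == "b", b := 0L]        # subassign: only rows where grp == "b"
DT[, m := mean(b), by = grp]   # add a column by group, by reference
```

Note that none of these calls use `<-`; the update happens inside j and DT is changed in place.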
:= in j can be combined with all types of i (such as binary search), and all types of by. This is one reason why := has been implemented in j. See FAQ 2.16 for analogies to SQL.
When LHS is a factor column and RHS is a character vector with items missing from the factor levels, the new level(s) are automatically added (by reference, efficiently), unlike base methods.
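A minimal sketch of this factor behaviour (the column name f is illustrative):

```r
library(data.table)

DT = data.table(f = factor(c("a", "b")))
DT[2, f := "c"]     # "c" is not an existing level; it is added by reference
levels(DT$f)        # now includes "c"
```

With a base data.frame, the equivalent `DF$f[2] <- "c"` would instead produce an NA with a warning, because the level is not added automatically.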
Unlike assignment to a data.frame, the (potentially large) LHS is not coerced to match the type of the (often small) RHS. Instead the RHS is coerced to match the type of the LHS, if necessary. Where this involves double precision values being coerced to an integer column, a warning is given (whether or not fractional data is truncated). The motivation for this is efficiency. It is best to get the column types correct up front and stick to them.

Changing a column type is possible but deliberately harder: provide a whole column as the RHS. This RHS is then plonked into that column slot and we call this plonk syntax, or replace column syntax if you prefer. By needing to construct a full-length vector of a new type, you as the user are more aware of what is happening, and it is clearer to readers of your code that you really do intend to change the column type.
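A minimal sketch of the two behaviours described above (the column name x is illustrative):

```r
library(data.table)

DT = data.table(x = 1:5)     # x is an integer column
DT[2, x := 3]                # double RHS coerced to the column's integer type
DT[, x := as.numeric(x)]     # plonk syntax: a whole-column RHS changes the type
class(DT$x)                  # "numeric"
```

The subassignment preserves the column's integer type; only the full-column replacement on the next line changes it to numeric.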
data.tables are not copied-on-change by :=, setkey or any of the other set* functions. See ?copy.
Additional resources: search for ":=" in the FAQ vignette, and search Stack Overflow's data.table tag for "reference".

Adding a column by reference is possible because data.table over-allocates the internal vector of column pointers; see ?truelength. By defining := in j we believe the update syntax is natural, and scales, but also it bypasses [<- dispatch via *tmp* and allows := to update by reference with no copies of any part of memory at all.
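One way to observe that no copy is made is data.table's address() helper, which reports the memory address of an object:

```r
library(data.table)

DT = data.table(x = 1:3)
before = address(DT)       # memory address of DT before the update
DT[2, x := 100L]           # update by reference
identical(address(DT), before)   # TRUE: same object, no copy was made
```

By contrast, a base-R replacement such as `DF[2, "x"] <- 100L` on a data.frame goes through `[<-` and *tmp*, which typically copies.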
Since [.data.table incurs overhead to check the existence and type of arguments (for example), set() provides direct (but less flexible) assignment by reference with low overhead, appropriate for use inside a for loop. See examples. := is more flexible than set() because := is intended to be combined with i and by in single queries on large datasets.
DT is modified by reference and the new value is returned. If you require a copy, take a copy first (using DT2 = copy(DT)). Recall that this package is for large data (of mixed column types, with multi-column keys) where updates by reference can be many orders of magnitude faster than copying the entire table.
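A minimal sketch of why copy() matters (the column names a, b and c are illustrative): plain assignment binds a second name to the same table, so := through either name changes both.

```r
library(data.table)

DT  = data.table(a = 1:3)
DT2 = DT               # NOT a copy: both names refer to the same table
DT2[, b := 2L]         # column b appears in DT as well
"b" %in% names(DT)     # TRUE

DT3 = copy(DT)         # a true copy
DT3[, c := 3L]         # DT is unaffected
"c" %in% names(DT)     # FALSE
```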
Examples:

    DT = data.table(a=LETTERS[c(1,1:3)], b=4:7, key="a")
    DT[, c := 8]               # add a numeric column, 8 for all rows
    DT[, d := 9L]              # add an integer column, 9L for all rows
    DT[, c := NULL]            # remove column c
    DT[2, d := 10L]            # subassign by reference to column d
    DT                         # DT changed by reference
    DT[b>4, b := d*2L]         # subassign to b using d, where b>4
    DT["A", b := 0L]           # binary search for group "A" and set column b
    DT[, e := mean(d), by=a]   # add new column by group by reference
    DT["B", f := mean(d)]      # subassign to new column, NA initialized

    # Speed example ...
    m = matrix(1, nrow=100000, ncol=100)
    DF = as.data.frame(m)
    DT = as.data.table(m)

    system.time(for (i in 1:1000) DF[i,1] <- i)
    # 591 seconds
    system.time(for (i in 1:1000) DT[i, V1 := i])
    # 2.4 seconds  ( 246 times faster, 2.4 is overhead in [.data.table )
    system.time(for (i in 1:1000) set(DT, i, 1L, i))
    # 0.03 seconds ( 19700 times faster, overhead of [.data.table is avoided )

    # However, normally, we call [.data.table *once* on *large* data, not many
    # times on small data. The above is to demonstrate overhead, not to
    # recommend looping in this way. But the option of set() is there if you
    # need it.