Equivalent of an R data.table rolling join in Python and PySpark
Does anyone know how to do an R data.table rolling join in PySpark?
Borrowing the rolling-join example and the great explanation from Ben here:
library(data.table)
sales<-data.table(saleID=c("S1","S2","S3","S4","S5"),
saleDate=as.Date(c("2014-2-20","2014-5-1","2014-6-15","2014-7-1","2014-12-31")))
commercials<-data.table(commercialID=c("C1","C2","C3","C4"),
commercialDate=as.Date(c("2014-1-1","2014-4-1","2014-7-1","2014-9-15")))
setkey(sales,"saleDate")
setkey(commercials,"commercialDate")
sales[commercials, roll=TRUE]
The result is:
saleDate saleID commercialID
1: 2014-01-01 NA C1
2: 2014-04-01 S1 C2
3: 2014-07-01 S4 C3
4: 2014-09-15 S4 C4
Any help is much appreciated.
Solution:
A rolling join is not a join plus fillna
First of all: a rolling join is not the same as a join followed by fillna! That would only be the case if the keys of the joined table (in data.table speak: the left table of the right join) had exact equivalents in the main table. A data.table rolling join does not require that.
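A minimal toy illustration of the difference (my own example, not from the original answer): when none of B's keys occur in A, an exact-key merge matches nothing, so fillna has nothing to carry forward, while a rolling join would still pair each key of B with the nearest earlier key of A.
import pandas as pd

A = pd.DataFrame({'key': [1, 3], 'value': ['a1', 'a2']})
B = pd.DataFrame({'key': [2, 4]})
# Exact-key left merge: no key of B exists in A, so every value is NaN.
merged = B.merge(A, on='key', how='left')
print(merged['value'].ffill())  # still all NaN - there is nothing to fill from
# A backward rolling join would instead give key 2 -> 'a1' and key 4 -> 'a2'.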
As far as I know, there is no direct equivalent, and I searched for quite a while. There is even a pandas issue for it: https://github.com/pandas-dev/pandas/issues/7546. However:
Pandas solution:
There is a solution in pandas. Let's assume your right data.table is table A and your left data.table is table B.
1. Sort tables A and B each by the key.
2. Add a tag column to A that is all 0 and a tag column to B that is all 1.
3. Delete all columns except the key and tag from B (this can be omitted, but it is clearer this way) and call the resulting table B'. Keep B as the original - we will need it later.
4. Concatenate A with B' into C and ignore the fact that the rows coming from B' have many NAs.
5. Sort C by the key.
6. Create a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag)).
7. Use filtering (query) on tag to get rid of all the B'-rows.
8. Add a running counter column groupNr to the original B (integers from 0 to N-1 or from 1 to N, depending on whether you want a forward or backward rolling join).
9. Join B with C on groupNr to get D.
Code:
import numpy as np
import pandas as pd

#0. 'date' is the key for the rolling join. It does not have to be a date.
A = pd.DataFrame.from_dict(
{'date': pd.to_datetime(["2014-3-1", "2014-5-1", "2014-6-1", "2014-7-1", "2014-12-1"]),
'value': ["a1", "a2", "a3", "a4", "a5"]})
B = pd.DataFrame.from_dict(
{'date': pd.to_datetime(["2014-1-15", "2014-3-15", "2014-6-15", "2014-8-15", "2014-11-15", "2014-12-15"]),
'value': ["b1", "b2", "b3", "b4", "b5", "b6"]})
#1. Sort the tables A and B each by the key.
A = A.sort_values('date')
B = B.sort_values('date')
#2. Add a tag column to A that is all 0 and a tag column to B that is all 1.
A['tag'] = 0
B['tag'] = 1
#3. Delete all columns except the key and tag from B (can be omitted, but it is clearer this way) and call the table B'. Keep B as the original - we are going to need it later.
B_ = B[['date','tag']] # You need two [], because you get a series otherwise.
#4. Concatenate A with B' into C and ignore the fact that the rows from B' have many NAs.
C = pd.concat([A, B_])
#5. Sort C by key.
C = C.sort_values('date')
#6. Make a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag))
C = C.assign(groupNr = np.cumsum(C.tag))
#7. Using filtering (query) on tag get rid of all B'-rows.
C = C[C.tag == 0]
#8. Add a running counter column groupNr to the original B (integers from 0 to N-1
#   or from 1 to N, depending on whether you want a forward or backward rolling join).
#   Use exactly one of the following two lines:
#B['groupNr'] = range(1, len(B) + 1)  # B's values are carried forward to A's rows
B['groupNr'] = range(len(B))  # B's values are carried backward to A's rows
#9. Join B with C on groupNr to get D.
D = C.set_index('groupNr').join(B.set_index('groupNr'), lsuffix='_A', rsuffix='_B')
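The question itself asked for PySpark. The same recipe carries over using window functions; below is a minimal sketch of that translation, under my own assumptions (the column names value_A, value_B and date_B, and the SparkSession setup, are illustrative choices, not part of the original answer). A running sum over an ordered window replaces the explicit sort plus np.cumsum, and with the backward numbering each A-row pairs with the next B-row, just like the pandas result above (e.g. a1 pairs with b2).
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

# Same toy data as the pandas example above.
A = spark.createDataFrame(
    [("2014-03-01", "a1"), ("2014-05-01", "a2"), ("2014-06-01", "a3"),
     ("2014-07-01", "a4"), ("2014-12-01", "a5")],
    ["date", "value"]).withColumn("date", F.to_date("date"))
B = spark.createDataFrame(
    [("2014-01-15", "b1"), ("2014-03-15", "b2"), ("2014-06-15", "b3"),
     ("2014-08-15", "b4"), ("2014-11-15", "b5"), ("2014-12-15", "b6")],
    ["date", "value"]).withColumn("date", F.to_date("date"))

# Steps 1-6: tag both sides, stack them, and let a running sum over a
# window ordered by the key replace the explicit sort + cumsum.
# (No partitionBy: all rows pass through one partition - fine for a sketch.)
stacked = (
    A.select("date", F.col("value").alias("value_A"), F.lit(0).alias("tag"))
     .unionByName(B.select("date",
                           F.lit(None).cast("string").alias("value_A"),
                           F.lit(1).alias("tag")))
     .withColumn("groupNr", F.sum("tag").over(Window.orderBy("date", "tag"))))

# Step 7: keep only the A-rows.
C = stacked.filter(F.col("tag") == 0).drop("tag")

# Step 8: number B's rows 0..N-1 (the "carried backward" variant above).
B_numbered = B.select(
    F.col("date").alias("date_B"), F.col("value").alias("value_B")
).withColumn("groupNr", F.row_number().over(Window.orderBy("date_B")) - 1)

# Step 9: join on the counter.
D = C.join(B_numbered, "groupNr", "left").orderBy("date")
D.show()
As an aside, newer pandas versions (0.19 and later) added pd.merge_asof, which performs this kind of rolling/asof join directly on sorted inputs, e.g. pd.merge_asof(B, A, on='date').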
Tags: pyspark-sql, python, r, data-table, pyspark