pyspark.sql.Window.rangeBetween

static Window.rangeBetween(start: int, end: int) → pyspark.sql.window.WindowSpec

Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).

Both start and end are relative to the current row. For example, “0” means “current row”, while “-1” means one row before the current row, and “5” means five rows after the current row.
We recommend users use Window.unboundedPreceding, Window.unboundedFollowing, and Window.currentRow to specify special boundary values, rather than using integral values directly.

A range-based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has a value of 10 and the lower bound offset is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and this expression must have a numerical data type. An exception can be made when the offset is unbounded, because no value modification is needed; in this case multiple and non-numeric ORDER BY expressions are allowed.
New in version 2.1.0.
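To make the value-based arithmetic above concrete, here is a minimal sketch (assuming an active SparkSession bound to spark, as in the examples below, and a hypothetical single numeric column value):

>>> from pyspark.sql import Window
>>> from pyspark.sql import functions as func
>>> df = spark.createDataFrame([(1,), (4,), (7,), (10,)], ["value"])
>>> # For the row with value 10 and lower bound offset -3, the frame covers
>>> # values in [10 - 3, 10] = [7, 10], i.e. the rows with values 7 and 10.
>>> w = Window.orderBy("value").rangeBetween(-3, Window.currentRow)
>>> df.withColumn("sum", func.sum("value").over(w)).sort("value").show()
+-----+---+
|value|sum|
+-----+---+
|    1|  1|
|    4|  5|
|    7| 11|
|   10| 17|
+-----+---+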
Parameters

- start : int
  boundary start, inclusive. The frame is unbounded if this is Window.unboundedPreceding, or any value less than or equal to max(-sys.maxsize, -9223372036854775808).
- end : int
  boundary end, inclusive. The frame is unbounded if this is Window.unboundedFollowing, or any value greater than or equal to min(sys.maxsize, 9223372036854775807). (See the sketch after this list for the special boundary values in action.)
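As a brief sketch of the special boundary values described above (reusing the hypothetical value DataFrame from the earlier sketch), an unbounded start turns the frame into a running total:

>>> # RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW: a running total.
>>> w = Window.orderBy("value").rangeBetween(Window.unboundedPreceding, Window.currentRow)
>>> df.withColumn("running_sum", func.sum("value").over(w)).sort("value").show()
+-----+-----------+
|value|running_sum|
+-----+-----------+
|    1|          1|
|    4|          5|
|    7|         12|
|   10|         22|
+-----+-----------+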
Returns

- WindowSpec
  A WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).
Examples
>>> from pyspark.sql import Window
>>> from pyspark.sql import functions as func
>>> df = spark.createDataFrame(
...     [(1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b")], ["id", "category"])
>>> df.show()
+---+--------+
| id|category|
+---+--------+
|  1|       a|
|  1|       a|
|  2|       a|
|  1|       b|
|  2|       b|
|  3|       b|
+---+--------+
Calculate the sum of id in the range from the id of the current row to the id of the current row + 1, within each category partition:
>>> window = Window.partitionBy("category").orderBy("id").rangeBetween(Window.currentRow, 1)
>>> df.withColumn("sum", func.sum("id").over(window)).sort("id", "category").show()
+---+--------+---+
| id|category|sum|
+---+--------+---+
|  1|       a|  4|
|  1|       a|  4|
|  1|       b|  3|
|  2|       a|  2|
|  2|       b|  5|
|  3|       b|  3|
+---+--------+---+
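For contrast, a rows-based frame counts physical rows rather than value distance; a minimal sketch on the same data (no output shown, since the split of sums between the two tied id=1 rows in category "a" is not deterministic under a rows frame):

>>> # With the range frame above, both id=1 rows in category "a" are peers:
>>> # the frame covers values in [1, 1 + 1] = [1, 2], i.e. {1, 1, 2}, so each
>>> # gets sum 4. A rows frame of (currentRow, 1) would instead give each row
>>> # itself plus only the single next row.
>>> window = Window.partitionBy("category").orderBy("id").rowsBetween(Window.currentRow, 1)
>>> rows_df = df.withColumn("sum", func.sum("id").over(window))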