pyspark.pandas.groupby.SeriesGroupBy.value_counts

SeriesGroupBy.value_counts(sort: Optional[bool] = None, ascending: Optional[bool] = None, dropna: bool = True) → pyspark.pandas.series.Series

Compute group sizes.
Parameters
- sort : boolean, default None
  Sort by frequencies.
- ascending : boolean, default False
  Sort in ascending order.
- dropna : boolean, default True
  Don’t include counts of NaN.
Examples
>>> df = ps.DataFrame({'A': [1, 2, 2, 3, 3, 3],
...                    'B': [1, 1, 2, 3, 3, 3]},
...                   columns=['A', 'B'])
>>> df
   A  B
0  1  1
1  2  1
2  2  2
3  3  3
4  3  3
5  3  3
>>> df.groupby('A')['B'].value_counts().sort_index()
A  B
1  1    1
2  1    1
   2    1
3  3    3
Name: B, dtype: int64
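The sort, ascending, and dropna flags are easiest to see with a column that contains missing values. The snippet below is a minimal sketch, not part of the original doctest: it assumes the same ps import as above, and the exact rendering of the results can vary between pandas-on-Spark versions, so expected outputs are omitted.

>>> df2 = ps.DataFrame({'A': [1, 1, 2, 2, 2],
...                     'B': [1.0, None, 2.0, 2.0, None]})
>>> # dropna=True (the default) excludes the None/NaN entries from the counts
>>> df2.groupby('A')['B'].value_counts().sort_index()
>>> # dropna=False keeps a per-group count for NaN as well
>>> df2.groupby('A')['B'].value_counts(dropna=False).sort_index()
>>> # sort=True with ascending=True orders each group's counts, least frequent first
>>> df2.groupby('A')['B'].value_counts(sort=True, ascending=True)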