Show distinct column values in pyspark dataframe

With a pyspark dataframe, how do you do the equivalent of Pandas `df['col'].unique()`?

I want to list out all the unique values in a pyspark dataframe column.

Not the SQL-type way (registerTempTable, then a SQL query for the distinct values).

Also, I don't need `groupBy` then `countDistinct`; instead I want the distinct VALUES in that column.
