How do I change a dataframe column from String type to Double type in PySpark?
I have a dataframe with a column of type String. I want to change the column type to Double in PySpark.
Here is how I did it:
from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType
toDoublefunc = UserDefinedFunction(lambda x: x, DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))
Just wanted to know: is this the right way to do it? While running it through Logistic Regression I get an error, so I wonder whether this is the cause of the trouble.
There is no need for a UDF here. Column already provides a cast method with a DataType instance:
from pyspark.sql.types import DoubleType
changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))
or with a short string:
changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))
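As a quick end-to-end illustration, here is a minimal, self-contained sketch of the cast on sample data (the SparkSession setup and the sample values are assumptions for the example, not taken from the question). Note that strings which cannot be parsed as numbers become null after the cast:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Hypothetical sample data; the column names mirror the question.
df = spark.createDataFrame([("1.5",), ("42",), ("oops",)], ["show"])

changedTypedf = df.withColumn("label", df["show"].cast("double"))
changedTypedf.printSchema()  # label: double (nullable = true)
changedTypedf.show()         # "1.5" -> 1.5, "42" -> 42.0, "oops" -> null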
where the canonical string names (other variations can be supported as well) correspond to the simpleString value. So for atomic types:
from pyspark.sql import types

for t in ['BinaryType', 'BooleanType', 'ByteType', 'DateType',
          'DecimalType', 'DoubleType', 'FloatType', 'IntegerType',
          'LongType', 'ShortType', 'StringType', 'TimestampType']:
    print(f"{t}: {getattr(types, t)().simpleString()}")
BinaryType: binary
BooleanType: boolean
ByteType: tinyint
DateType: date
DecimalType: decimal(10,0)
DoubleType: double
FloatType: float
IntegerType: int
LongType: bigint
ShortType: smallint
StringType: string
TimestampType: timestamp
and, for example, for complex types:
types.ArrayType(types.IntegerType()).simpleString()
'array<int>'

types.MapType(types.StringType(), types.IntegerType()).simpleString()
'map<string,int>'
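These simpleString names can also be passed straight to cast. Here is a minimal sketch using a made-up array column (the DataFrame and column name are illustrative assumptions only):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Hypothetical example: a column holding arrays of numeric strings.
df = spark.createDataFrame([(["1", "2", "3"],)], ["xs"])

# The complex-type short name works in cast as well,
# converting each element from string to int.
casted = df.withColumn("xs", df["xs"].cast("array<int>"))
casted.printSchema()  # xs: array, element: integer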