The newline-character problem when saving a DataFrame as a Hive table

Posted by mvpboss1004 on 2020-11-09

When a PySpark DataFrame is saved directly as a Hive table, string values containing newline characters get split incorrectly. Take Spark 3.0.0 as an example: we write a single row whose string contains a newline, yet counting the rows returns 2:

>>> df = spark.createDataFrame([(1, 'hello\nworld')], ('id', 'msg'))
>>> df.write.format('hive').saveAsTable('test.newline0')
>>> spark.sql('SELECT COUNT(1) FROM test.newline0').show()
+--------+
|count(1)|
+--------+
|       2|
+--------+
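The miscount can be reproduced without Spark at all. Hive's default textfile layout joins fields with the '\x01' control character and rows with '\n', so an embedded newline in a field value is indistinguishable from a row boundary. A minimal sketch of that mechanism (assuming the default '\x01' field delimiter):

```python
import os
import tempfile

# One logical record: id=1, msg='hello\nworld'.
# Hive's textfile format joins fields with '\x01' and rows with '\n'.
record = "\x01".join(["1", "hello\nworld"])

with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write(record + "\n")
    path = f.name

# A line-oriented reader (like Hive's textfile SerDe) recovers two rows,
# because the embedded '\n' looks exactly like a row delimiter.
with open(path) as f:
    rows = f.read().splitlines()

print(len(rows))  # 2 "rows" recovered from 1 record
os.remove(path)
```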

It took me a long time to find any documentation on this. The answer turns out to be in the "Specifying storage format for Hive tables" section of the Spark SQL guide: when saving directly with the hive format, the underlying file format is 'textfile' and the row delimiter is fixed to '\n', so embedded newlines naturally break rows. This can be verified with the following code:

>>> df.write.format('hive').option('fileFormat', 'textfile').option('lineDelim', '\x13').saveAsTable('test.newline1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 868, in saveAsTable
    self._jwrite.saveAsTable(name)
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 137, in deco
    raise_from(converted)
  File "<string>", line 3, in raise_from
pyspark.sql.utils.IllegalArgumentException: Hive data source only support newline '\n' as line delimiter, but given:
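If the table must stay in textfile format, one workaround (not from the original post, just a hedged sketch) is to strip or escape the embedded newlines before writing, e.g. with Spark's `regexp_replace` on the affected column. The sanitising step itself, shown in plain Python:

```python
import re


def sanitize_newlines(value: str, replacement: str = " ") -> str:
    """Collapse embedded CR/LF runs so a textfile row stays on one line."""
    return re.sub(r"[\r\n]+", replacement, value)


# The Spark equivalent would be something like
#   F.regexp_replace('msg', r'[\r\n]+', ' ')
# applied before saveAsTable (the column name 'msg' is from the example above).
print(sanitize_newlines("hello\nworld"))  # hello world
```

This loses the original newlines, of course, so it is only appropriate when the exact whitespace does not matter.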

The fix is simple: save with a different file format:

>>> df.write.format('hive').option('fileFormat', 'parquet').saveAsTable('test.newline1')
>>> spark.sql('SELECT COUNT(1) FROM test.newline1').show()
+--------+
|count(1)|
+--------+
|       1|
+--------+
