
[Question] Error running the client_tweeter.py program

Hello, the exception below occurred when I tried to run the client_tweeter.py program.

Please, how do I fix this?

```
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/03/06 12:50:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/03/06 12:50:24 WARN TextSocketSourceProvider: The socket source should not be used for production applications! It does not support recovery.
23/03/06 12:50:27 WARN ResolveWriteToStream: Temporary checkpoint location created which is deleted normally when the query didn't fail: C:\Users\flavi_000\AppData\Local\Temp\temporary-6d8c8a3b-97d6-4015-95d7-9d5bb3c7c0d1. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
23/03/06 12:50:27 WARN ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
23/03/06 12:50:29 ERROR MicroBatchExecution: Query [id = 909e0e49-3a4f-477b-bbbb-01b7f459a97c, runId = a1c5b82d-5dcf-46d5-96a6-3007e1835fc0] terminated with error
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
	at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1218)
	at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1423)
	at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:601)
	at org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:177)
	at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:548)
	at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1915)
	at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1911)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1917)
	at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1876)
	at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1835)
	at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.list(CheckpointFileManager.scala:316)
	at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.getLatestBatchId(HDFSMetadataLog.scala:213)
	at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.getLatest(HDFSMetadataLog.scala:220)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:223)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:375)
	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:373)
	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:219)
	at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:213)
	at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:307)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
```
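For context, this is a minimal sketch of the kind of socket-source streaming query the log implies (the `TextSocketSourceProvider` warning and the temporary checkpoint folder come from this pattern). The host, port, and console sink here are placeholder assumptions; the actual client_tweeter.py may differ:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("client_tweeter").getOrCreate()

# Reading from a socket triggers the TextSocketSourceProvider warning seen
# in the log: this source is for testing only and does not support recovery.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")  # placeholder host
         .option("port", 9999)         # placeholder port
         .load())

# With no explicit checkpointLocation, Spark creates the temporary
# checkpoint folder under %TEMP% that the WARN message mentions.
query = (lines.writeStream
         .format("console")
         .start())
query.awaitTermination()
```

Note that the ERROR itself is thrown while Spark lists that checkpoint folder on the local Windows filesystem (`FileContextBasedCheckpointFileManager.list` in the stack trace); passing an explicit `.option("checkpointLocation", ...)` to `writeStream` controls where that folder is created, though the native-library lookup still happens on Windows.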