Spark Pipeline imports do not work in PySparkling

Description

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/util.py", line 311, in load
    return cls.read().load(path)
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 207, in load
    return JavaMLReader(self.cls).load(path)
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/util.py", line 253, in load
    return self._clazz._from_java(java_obj)
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 154, in _from_java
    py_stages = [JavaParams._from_java(s) for s in java_stage.getStages()]
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/wrapper.py", line 220, in _from_java
    py_type = __get_class(stage_name)
  File "/Users/kuba/devel/programs/spark-2.3.0-bin-hadoop2.7/python/pyspark/ml/wrapper.py", line 214, in __get_class
    m = __import__(module)
ImportError: No module named h2o.algos
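The traceback points at PySpark's class-resolution step in pyspark/ml/wrapper.py: when a persisted Pipeline is loaded, each JVM stage's class name is mapped to a Python class path by a prefix substitution, and the resulting module is imported. A minimal sketch of that mapping (paraphrased from the traceback, not the exact PySpark source; the H2O stage class name "org.apache.spark.ml.h2o.algos.H2OGBM" is an illustrative assumption) shows why an H2O stage triggers the ImportError:

```python
def py_type_name(java_class_name):
    # PySpark derives the Python class path from the JVM class name by a
    # simple prefix replacement, e.g.
    # "org.apache.spark.ml.feature.Binarizer" -> "pyspark.ml.feature.Binarizer"
    return java_class_name.replace("org.apache.spark", "pyspark")

# A hypothetical Sparkling Water stage class name: after the substitution it
# points at a module that does not exist on the Python side, so the
# subsequent __import__ fails with "No module named h2o.algos".
print(py_type_name("org.apache.spark.ml.h2o.algos.H2OGBM"))
```

Under this assumption, __import__("pyspark.ml.h2o.algos") is attempted and there is no such Python package, which matches the reported error.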

Environment

None

Status

Assignee

Jakub Hava

Reporter

Jakub Hava

Labels

None

Release Priority

None

CustomerVisible

No

testcase 1

None

testcase 2

None

testcase 3

None

h2ostream link

None

Affected Spark version

None

AffectedContact

None

AffectedCustomers

None

AffectedPilots

None

AffectedOpenSource

None

Support Assessment

None

Customer Request Type

None

Support ticket URL

None

End date

None

Baseline start date

None

Baseline end date

None

Task progress

None

Task mode

None

Fix versions

Priority

Major