python – Creating a custom Transformer in PySpark ML

2019-09-17 19:08:21

Tags: apache-spark-ml, python, apache-spark, pyspark, nltk


I am new to Spark SQL DataFrames and ML on them (PySpark).
How can I create a custom tokenizer, which for example removes stop words and uses some libraries from nltk? Can I extend the default one?

Thanks.

Solution:

Can I extend the default one?

Not really. The default Tokenizer is a subclass of pyspark.ml.wrapper.JavaTransformer and, like the other transformers and estimators in pyspark.ml.feature, delegates the actual processing to its Scala counterpart. Since you want to use Python, you should extend pyspark.ml.pipeline.Transformer directly.

import nltk

from pyspark import keyword_only  # Spark < 2.0: from pyspark.ml.util import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

class NLTKWordPunctTokenizer(Transformer, HasInputCol, HasOutputCol):

    @keyword_only
    def __init__(self, inputCol=None, outputCol=None, stopwords=None):
        super(NLTKWordPunctTokenizer, self).__init__()
        self.stopwords = Param(self, "stopwords", "")
        self._setDefault(stopwords=set())
        kwargs = self._input_kwargs
        self.setParams(**kwargs)

    @keyword_only
    def setParams(self, inputCol=None, outputCol=None, stopwords=None):
        kwargs = self._input_kwargs
        return self._set(**kwargs)

    def setStopwords(self, value):
        self._paramMap[self.stopwords] = value
        return self

    def getStopwords(self):
        return self.getOrDefault(self.stopwords)

    def _transform(self, dataset):
        stopwords = self.getStopwords()

        def f(s):
            tokens = nltk.tokenize.wordpunct_tokenize(s)
            return [t for t in tokens if t.lower() not in stopwords]

        t = ArrayType(StringType())
        out_col = self.getOutputCol()
        in_col = dataset[self.getInputCol()]
        return dataset.withColumn(out_col, udf(f, t)(in_col))

Example usage (data taken from ML – Features):

sentenceDataFrame = spark.createDataFrame([
  (0, "Hi I heard about Spark"),
  (0, "I wish Java could use case classes"),
  (1, "Logistic regression models are neat")
], ["label", "sentence"])

tokenizer = NLTKWordPunctTokenizer(
    inputCol="sentence", outputCol="words",  
    stopwords=set(nltk.corpus.stopwords.words('english')))

tokenizer.transform(sentenceDataFrame).show()
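
Because NLTKWordPunctTokenizer extends pyspark.ml.pipeline.Transformer, it can also be used as a stage in an ML Pipeline like any built-in transformer. A minimal sketch (the downstream CountVectorizer stage is only an illustrative choice added here, not part of the original answer):

from pyspark.ml import Pipeline
from pyspark.ml.feature import CountVectorizer

pipeline = Pipeline(stages=[
    tokenizer,  # the custom NLTK tokenizer defined above
    CountVectorizer(inputCol="words", outputCol="features")])

model = pipeline.fit(sentenceDataFrame)
model.transform(sentenceDataFrame).select("words", "features").show(truncate=False)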

For a custom Python Estimator, see How to Roll a Custom Estimator in PySpark mllib.
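
The linked post covers the details; as a rough, hypothetical sketch only, an Estimator/Model pair built on the same mixin pattern could look like this (MeanImputer, MeanImputerModel and their behaviour are invented for illustration, not taken from the linked answer):

from pyspark import keyword_only
from pyspark.ml import Estimator, Model
from pyspark.ml.param.shared import HasInputCol, HasOutputCol
from pyspark.sql.functions import avg, coalesce, col, lit

class MeanImputerModel(Model, HasInputCol, HasOutputCol):
    """Fitted companion model: replaces nulls with the mean learned in fit()."""
    def __init__(self, mean=None):
        super(MeanImputerModel, self).__init__()
        self.mean = mean

    def _transform(self, dataset):
        return dataset.withColumn(
            self.getOutputCol(),
            coalesce(col(self.getInputCol()), lit(self.mean)))

class MeanImputer(Estimator, HasInputCol, HasOutputCol):
    """fit() computes the mean of inputCol and bakes it into the returned model."""
    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        super(MeanImputer, self).__init__()
        kwargs = self._input_kwargs
        self._set(**kwargs)

    def _fit(self, dataset):
        mean_value = dataset.agg(avg(col(self.getInputCol()))).first()[0]
        model = MeanImputerModel(mean=mean_value)
        model._set(inputCol=self.getInputCol(), outputCol=self.getOutputCol())
        return model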

⚠ This answer depends on internal API and is compatible with Spark 2.0.3, 2.1.1, 2.2.0 or later (SPARK-19348). For code compatible with previous Spark versions, see revision 8.

Source: https://codeday.me/bug/20190917/1809845.html
