pyspark
Here are 1,928 public repositories matching this topic...
Hello everyone,
Recently I tried to set up petastorm on my company's Hadoop cluster. However, since the cluster uses Kerberos for authentication, using petastorm failed. I figured out that petastorm relies on pyarrow, which does support Kerberos authentication. I hacked "petastorm/petastorm/hdfs/namenode.py" around line 250 and replaced the driver selection with:

    driver = 'libhdfs'
    return pyarrow.hdfs.connect(...)  # connect via the libhdfs driver; the original call was truncated here, arguments elided
If these are not class methods, the method would be invoked for every test, and a new session would be created for each of those tests.

    class PySparkTest(unittest.TestCase):
        @classmethod
        def suppress_py4j_logging(cls):
            logger = logging.getLogger('py4j')
            logger.setLevel(logging.WARN)

        @classmethod
        def create_testing_pyspark_session(cls):
            # standard local test session (the original snippet was cut off at this return)
            return SparkSession.builder \
                .master('local[2]') \
                .appName('testing-pyspark-session') \
                .getOrCreate()
User story
As a user, I want to quickly connect my Snowflake data warehouse to Kuwala and start applying transformations. I only want to enter my credentials and establish the connection. Once connected, I want to see the database schema with all available tables. For every existing table, I want to see a preview of the data and the column types.
Acceptance criteria
- The
These files belong to the Gimel Discovery Service, which is still work-in-progress at PayPal and not yet open sourced. In addition, the logic in these files is outdated, so it does not make sense to keep them in the repo.
https://github.com/paypal/gimel/search?l=Shell
Remove --> gimel-dataapi/gimel-core/src/main/scripts/tools/bin/hbase/hbase_ddl_creator.sh
Pivot missing categories breaks FeatureSet/AggregatedFeatureSet
Summary
When defining a feature set, it is expected that the pivot will contain all categories so that the resulting Source dataframe is suitable for transformation. When some categories are missing from the data, FeatureSet and AggregatedFeatureSet break.
Feature related:
Age: legacy
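One way to see the failure mode described above, and a common fix, is to pin the expected category list when pivoting so that missing categories still produce columns. A pandas sketch (the column and category names are made up for illustration; butterfree itself runs on Spark) with a batch where category "c" never occurs:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 2],
    "category": ["a", "b", "a"],  # category "c" is absent from this batch
    "value": [10, 20, 30],
})

expected = ["a", "b", "c"]  # the full category set the feature set expects
pivoted = (
    df.pivot_table(index="id", columns="category", values="value", aggfunc="sum")
      .reindex(columns=expected)  # force missing categories to appear (as NaN columns)
      .reset_index()
)
print(list(pivoted.columns))  # ['id', 'a', 'b', 'c']
```

In PySpark the analogous fix is to pass the explicit value list to pivot, e.g. `df.groupBy("id").pivot("category", ["a", "b", "c"])`, which guarantees a stable output schema regardless of which categories appear in a given batch.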
I have a simple regression task (using a LightGBMRegressor) where I want to penalize negative predictions more than positive ones. Is there a way to achieve this with the default LightGBM regression objectives (see https://lightgbm.readthedocs.io/en/latest/Parameters.html)? If not, is it somehow possible to define and pass a custom regression objective (there are many examples for the default LightGBM model)?
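The built-in objectives on that page are symmetric, so an asymmetric penalty needs a custom objective. The native lightgbm Python package accepts a callable that returns the gradient and Hessian; as far as I know, SynapseML's LightGBMRegressor only takes objective names as strings, so this may require training with native LightGBM instead. A minimal sketch of a squared-error objective that up-weights negative predictions (the asymmetric_l2 name and the alpha weight are my own, not from any library):

```python
import numpy as np

def asymmetric_l2(preds, labels, alpha=5.0):
    """Squared-error gradient/Hessian, weighted by `alpha` wherever the
    prediction is negative, so negative outputs are penalized harder."""
    residual = preds - labels
    weight = np.where(preds < 0, alpha, 1.0)
    grad = 2.0 * weight * residual
    hess = 2.0 * weight
    return grad, hess

# With alpha=5, a negative prediction gets a 5x larger gradient than a
# positive prediction with the same absolute residual.
grad, hess = asymmetric_l2(np.array([-1.0, 1.0]), np.array([0.0, 0.0]))
# grad -> [-10., 2.], hess -> [10., 2.]
```

In the lightgbm Python API a custom objective receives (preds, train_data) and reads the targets via train_data.get_label(); check your version's docs, since the argument for passing the callable (fobj vs. objective) has moved between releases.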