hyperparameter-optimization
Here are 587 public repositories matching this topic...
Describe the issue:
While computing channel dependencies, reshape_break_channel_dependency runs the following code to check whether the number of input channels equals the number of output channels:
in_shape = op_node.auxiliary['in_shape']
out_shape = op_node.auxiliary['out_shape']
in_channel = in_shape[1]
out_channel = out_shape[1]
return in_channel != out_channel
This is correct
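For illustration, here is a minimal, self-contained sketch of that check; the OpNode class below is only a stand-in for NNI's actual graph node type, and shapes are assumed to be in NCHW layout:

```python
class OpNode:
    """Stand-in for the real graph node; auxiliary holds NCHW shapes."""
    def __init__(self, in_shape, out_shape):
        self.auxiliary = {"in_shape": in_shape, "out_shape": out_shape}


def reshape_break_channel_dependency(op_node):
    # Index 1 is the channel dimension in NCHW layout.
    in_channel = op_node.auxiliary["in_shape"][1]
    out_channel = op_node.auxiliary["out_shape"][1]
    # True means the reshape changes the channel count and breaks the dependency.
    return in_channel != out_channel


# A reshape from (1, 64, 32, 32) to (1, 128, 16, 32) changes the channel dimension:
print(reshape_break_channel_dependency(OpNode((1, 64, 32, 32), (1, 128, 16, 32))))  # True
```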
Expected behavior
GridSampler does not have a seed argument, but it randomly shuffles the evaluation order of trials. This makes optimization results unreproducible if objective functions depend on previous trials, e.g., pruning. We should add a seed argument like the ones in TPESampler and RandomSampler.
Environment
- Optuna version: 3.0.0b1.dev
- Python version: 3.8.6
- OS: macOS-10.16-x
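For reference, a minimal sketch of how seeding already works for the other samplers, and where a hypothetical seed argument for GridSampler would go:

```python
import optuna

def objective(trial):
    x = trial.suggest_float("x", -1.0, 1.0)
    return x ** 2

# TPESampler and RandomSampler already accept a seed for reproducibility.
study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=10)

# GridSampler currently only takes the search space; the proposal is an
# analogous (hypothetical) seed argument to make the shuffled trial order
# deterministic, e.g. GridSampler(search_space, seed=42).
search_space = {"x": [-1.0, 0.0, 1.0]}
grid_study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
grid_study.optimize(objective, n_trials=3)
```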
Can Autosklearn handle Multi-Class/Multi-Label Classification and which classifiers will it use?
I have been trying to use AutoSklearn for multi-class classification, so my labels look like this:
0 1 2 3 4 ... 200
1 0 1 1 1 ... 1
0 1 0 0 1 ... 0
1 0 0 1 0 ... 0
1 1 0 1 0 ... 1
0 1 1 0 1 ... 0
1 1 1 0 0 ... 1
1 0 1 0 1 ... 0
I used this code:
y = y[:, (65,67,54,133,122,63,102
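For what it's worth, auto-sklearn treats a 2D binary indicator y as a multi-label problem; here is a minimal sketch with synthetic data (the column selection above is specific to the poster's dataset and is omitted):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from autosklearn.classification import AutoSklearnClassifier

# Synthetic stand-in for the binary indicator matrix shown above:
# each row is a sample, each column is one label.
rng = np.random.RandomState(0)
X = rng.rand(200, 20)
y = rng.randint(0, 2, size=(200, 5))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoSklearnClassifier(time_left_for_this_task=120)  # small budget for the example
automl.fit(X_train, y_train)
predictions = automl.predict(X_test)
print(predictions.shape)  # (n_samples, n_labels)
```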
Related: awslabs/autogluon#1479
Add a scikit-learn compatible API wrapper of TabularPredictor:
- TabularClassifier
- TabularRegressor
Required functionality (may need more than listed):
- init API
- fit API
- predict API
- works in sklearn pipelines
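A rough sketch of what such a wrapper could look like (names and defaults below are assumptions, not an agreed design):

```python
import pandas as pd
from sklearn.base import BaseEstimator, ClassifierMixin
from autogluon.tabular import TabularPredictor


class TabularClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical sklearn-style wrapper around TabularPredictor."""

    def __init__(self, time_limit=None, presets=None):
        self.time_limit = time_limit
        self.presets = presets

    def fit(self, X, y):
        # TabularPredictor expects a single DataFrame that includes the label column.
        train_data = pd.DataFrame(X).copy()
        train_data["_label"] = y
        self.predictor_ = TabularPredictor(label="_label").fit(
            train_data, time_limit=self.time_limit, presets=self.presets
        )
        return self

    def predict(self, X):
        return self.predictor_.predict(pd.DataFrame(X)).to_numpy()

    def predict_proba(self, X):
        return self.predictor_.predict_proba(pd.DataFrame(X)).to_numpy()
```

Because it subclasses BaseEstimator, the wrapper inherits get_params/set_params, which is what makes it usable inside sklearn pipelines and grid searches.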
When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1.
For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I've noticed that when one of R2 metric values in the L
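This looks like a sign-convention issue: a common pattern in AutoML frameworks (an assumption about the cause here, not a reading of mljar's source) is to negate maximization metrics such as R2 so the search always minimizes, and the negation has to be undone before the value is shown in the Leaderboard:

```python
from sklearn.metrics import r2_score

# Assumed convention: the optimizer minimizes, so R2 (higher is better) is negated.
def r2_as_loss(y_true, y_pred):
    return -r2_score(y_true, y_pred)

# The sign must be flipped back when reporting; otherwise the Leaderboard
# shows -0.41 for a model whose detailed page correctly reports 0.41.
def r2_for_report(loss_value):
    return -loss_value
```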
@HuangChiEn From the console msg, it is stuck at the step of building the ensemble model (sorry for not making that explicit in the msg). You can verify it by removing "ensemble": True from the settings.
Originally posted by @sonichi in microsoft/FLAML#536 (comment)
Suggestion: Modify https://github.com/microsoft/FLAML/blob/c1e1299855dcea378591628a
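A minimal sketch of the verification step suggested above, using FLAML's standard settings dict (the dataset and budget are placeholders):

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X_train, y_train = load_iris(return_X_y=True)

automl = AutoML()
settings = {
    "time_budget": 60,           # seconds; placeholder budget
    "task": "classification",
    "metric": "accuracy",
    # "ensemble": True,          # drop this entry to confirm the hang is in ensembling
}
automl.fit(X_train=X_train, y_train=y_train, **settings)
```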
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column that stores some kind of hash of the actual data. When a Dataset is about to be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing
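A minimal sketch of the proposed hash column, with hypothetical helper and model names (ATM's actual ORM classes may differ):

```python
import hashlib


def data_hash(train_path: str) -> str:
    """Digest of the raw file contents; identical data gives an identical hash."""
    with open(train_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Hypothetical dedup check before inserting a new Dataset row:
# new_hash = data_hash(train_path)
# existing = session.query(Dataset).filter_by(data_hash=new_hash).first()
# if existing is not None and existing.metadata_matches(new_metadata):
#     return existing  # reuse the existing Dataset instead of adding a duplicate
```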
Describe the bug
The code could conform more closely to PEP 8 and similar conventions.
Expected behavior
Less code st


Description
Per https://discuss.ray.io/t/how-do-i-sample-from-a-ray-datasets/5308, we should add a random_sample(N) API that returns N records from a Dataset. This can be implemented via a map_batches() followed by a take().
cc @simon-mo @clarkzinzow
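A rough sketch of that idea; random_sample here is a hypothetical helper, not an existing Dataset method, and the per-batch proportional sampling is only approximate:

```python
import ray

ds = ray.data.range(1000)

def random_sample(dataset, n, seed=None):
    """Hypothetical helper: sample roughly n records, then trim to exactly n."""
    total = dataset.count()
    # Oversample a little per batch, then trim to exactly n with take().
    fraction = min(1.0, 2 * n / total)
    sampled = dataset.map_batches(
        lambda df: df.sample(frac=fraction, random_state=seed),
        batch_format="pandas",
    )
    return sampled.take(n)

records = random_sample(ds, 10)
print(len(records))  # 10, with high probability given the oversampling
```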
Use case
Random sample is useful for a variety of scenarios, including creating training batches and downsampling the dataset for