ml
Machine learning is the practice of teaching a computer to learn from data. It uses pattern recognition, along with other predictive algorithms, to make judgments about incoming data. The field is closely related to artificial intelligence and computational statistics.
Here are 2,159 public repositories matching this topic...
/kind feature
Persona: Infrastructure Engineer
Control Plane Walkthrough:
- Install Kubeflow with (istio) KfDefs
- Install Kubeflow with dex - pending
- Configure / Get Istio IngressGateway endpoint
- Create profiles (w/ kubectl) #4725
- Create notebooks from user-namespace (w/ kubectl)
- Login (port-forward) into notebook as user
- Create TFJob (w/ kubectl)
"scipy.misc.imsave" was removed in SciPy version 1.3; pinning the previous release works around it:
--> pip install scipy==1.2.1
https://docs.scipy.org/doc/scipy/reference/release.1.3.0.html#scipy-interpolate-changes
and functions from scipy.misc (bytescale, fromimage, imfilter, imread, imresize, imrotate, imsave, imshow, toimage) have been removed.
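Besides pinning, code can guard on the SciPy version before touching the removed helpers (the `imageio` package offers `imwrite` as the usual replacement). A minimal sketch with a hypothetical helper name, using only the 1.3 cutoff stated in the release notes above:

```python
# Decide whether scipy.misc.imsave exists for a given SciPy version string.
# The helper name is hypothetical; the 1.3 cutoff is from the release notes.
def has_misc_imsave(scipy_version: str) -> bool:
    """Return True if scipy.misc.imsave still exists (SciPy < 1.3)."""
    major, minor = (int(part) for part in scipy_version.split(".")[:2])
    return (major, minor) < (1, 3)

print(has_misc_imsave("1.2.1"))  # True: the pinned version still has it
print(has_misc_imsave("1.3.0"))  # False: removed
```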
Several parts of the op spec, such as the main op description and the attribute, input, and output descriptions, become part of the binary that consumes ONNX (e.g. onnxruntime), increasing its size with strings that take no part in the execution of the model or its verification.
Setting __ONNX_NO_DOC_STRINGS doesn't really help here, since (1) it's not used in the SetDoc(string) overload (s
There are 2 places we are using BufferBlock<T> today:
We should consider replacing this dependency.
Thank you for submitting an issue. Please refer to our issue policy
for information on what types of issues we address. For help with debugging your code, please refer to Stack Overflow.
Please fill in this template and do not delete it unless you are sure your issue is outs
Following the official documentation, we build and test TensorFlow Serving with the latest dev Docker image. The examples cannot be run successfully because the TensorFlow version in tensorflow/serving:latest-devel is 2.1.0, while example code such as mnist_saved_model.py still uses TensorFlow 1.x APIs.
We should upgrade the examples to TensorFlow 2 so that users can build and test with the latest image.
I installed protobuf following the steps at https://www.yuque.com/mnn/cn/cvrt_windows.
However, when building the model conversion tool, the build fails with an error.

The steps were:
cd tools/converter
mkdir build
cd build
cmake -G "Ninja" -DMNN_BUILD_SHARED_LIBS=OFF -DCMAKE_BUILD_TYPE=Release ..
I'd like to know what the problem is. Thanks!
Hi, it is great that you have open-sourced Metaflow, but there is no mention of the actual license at https://docs.metaflow.org/introduction/what-is-metaflow (or the search engine doesn't find it).
At a great many companies, before you can use any open-source components or workflows you have to get approval from the lawyers, and they tend to be much more comfortable if they can see the license in
I have some values in slots that are surrounded by curly braces and are meant to be returned as is. Instead, the trailing brace is being stripped. "${website}" becomes "${website". I have training examples where the whole "${website}" is included. Is there a way to change this behavior?
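Until the framework offers a setting for this, one possible workaround is to repair the value after extraction. This is a hypothetical post-processing helper, not a framework API; it simply re-balances braces that were stripped:

```python
# Hypothetical workaround (not a framework API): if a slot value such as
# "${website}" lost its trailing brace during extraction, restore it by
# balancing the brace counts.
def restore_braces(value: str) -> str:
    opens = value.count("{")
    closes = value.count("}")
    return value + "}" * (opens - closes) if opens > closes else value

print(restore_braces("${website"))   # "${website}"
print(restore_braces("${website}"))  # unchanged
```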
Currently as per #403 , lightwood (and by extension mindsdb) fails to install on 32bit python environments.
We should see if there's an easy way to make it work, since 32-bit environments might still be used for a long time on various embedded devices.
If there isn't (or if there is, but it takes too long to implement support) we should add a note to the docs that you need 64-bit Python (where we
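Alongside a docs note, the requirement can be enforced at runtime with a stdlib-only check, so a 32-bit interpreter fails with a clear message instead of a cryptic build error. A sketch with illustrative names:

```python
import struct
import sys

# Detect a 32-bit Python build early and fail with a clear message.
# Pure stdlib; the function name is illustrative, not a lightwood API.
def require_64bit_python() -> None:
    if sys.maxsize <= 2**32:  # True on 32-bit builds
        raise RuntimeError(
            "lightwood requires 64-bit Python; "
            f"this interpreter is {8 * struct.calcsize('P')}-bit."
        )

require_64bit_python()  # no-op on a 64-bit interpreter
```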
.NET Core 2.2 reaches end of life December 23. Either upgrade the version to 3.1 LTS, which is what users should be already doing anyway, or target multiple versions of .NET Core so users who are still on 2.x are not broken.
Feature motivation
The Azure storage client is backward-incompatible; there's an issue #757 for upgrading the Azure connection to the latest client. The new version has an async interface that can be leveraged for the streams module, at least for downloads.
N.B. This is likely 3-4h of work and not a pressing need, so just putting this in the backlog.
Feature implementation
There's a
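The gain from an async interface is concurrency: downloads can overlap instead of running one by one. A self-contained asyncio sketch where `fake_download` stands in for the real Azure async client call (all names here are hypothetical):

```python
import asyncio

# Sketch of concurrent downloads via an async client. "fake_download"
# is a stand-in for the real storage call; asyncio.gather runs the
# coroutines concurrently and preserves input order in the results.
async def fake_download(blob_name: str) -> bytes:
    await asyncio.sleep(0)  # placeholder for network I/O
    return blob_name.encode()

async def download_all(blob_names):
    return await asyncio.gather(*(fake_download(n) for n in blob_names))

results = asyncio.run(download_all(["a.bin", "b.bin"]))
print(results)  # [b'a.bin', b'b.bin']
```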
Dear TF Hub Team,
The USE paper, Section 5, has an interesting paragraph on evaluation where the authors use arccos (inverse cosine), whose range is 0 to π radians, instead of plain cosine distance, whose range is 0 to 2.
". For the pairwise semantic similarity task, we directly assess
the similarity of the sentence embedding produced by our two encoders. As show
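The trick the quoted passage describes can be shown in a few lines of plain Python (a sketch of the general technique, not TF Hub code): take the cosine similarity of two vectors and pass it through arccos, yielding an angular distance in [0, π].

```python
import math

# Angular distance: arccos of cosine similarity, in [0, pi].
# The clamp guards against floating-point values slightly outside [-1, 1].
def angular_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos_sim = dot / (norm_u * norm_v)
    return math.acos(max(-1.0, min(1.0, cos_sim)))

print(angular_distance([1, 0], [1, 0]))   # 0.0 (identical direction)
print(angular_distance([1, 0], [0, 1]))   # ~1.5708 (pi/2, orthogonal)
print(angular_distance([1, 0], [-1, 0]))  # ~3.1416 (pi, opposite)
```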
Version
com.microsoft.ml.spark:mmlspark_2.11:jar:0.18.1
spark= 2.4.3
scala=2.11.12
data (csv with header) https://gist.github.com/ttpro1995/69051647a256af912803c9a16040f43a
download data and save as csv file, put into folder /data/public/HIGGS/higgs.test.predictioncsv
val data = spark.read.option("header","true").option("inferSchema", "true").csv("/data/public/HIGGS
Problem
Some of our transformers & estimators are not thoroughly tested or not tested at all.
Solution
Use OpTransformerSpec and OpEstimatorSpec base test specs to provide tests for all existing transformers & estimators.
First of all, good job with this library, it is amazing.
I would like to suggest an area on README.MD for use-case examples,
where anyone can fork and submit examples for review; the best examples could then be available for everyone to learn from.
Documentation
Doc strings
We need to be able to (eventually) output docs for modules (e.g. javadoc, godoc, etc). I'm fine with something like
The block comment immediately preceding the first definition of a function (or its type specification) is used as the documentation for it. The block comment immediately preceding the module declaration is used as the overall module documentation.
And maybe use markdown for f
When serving the model locally, a nice swagger documentation page becomes accessible that allows you to send requests to the model by clicking the "Try it out" button.
As part of the swagger specification it is possible to specify the schema the endpoint expects. When doing that, the "Try it out" button will be prefilled with values, so that the API end users can experiment with the endpoint more
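In OpenAPI, prefilling comes from `example` values in the request schema; Swagger UI uses them to populate the "Try it out" form. A minimal schema fragment expressed as a Python dict (the field names are illustrative, not the project's actual endpoint):

```python
import json

# Minimal OpenAPI request-body schema with "example" values.
# Swagger UI prefills "Try it out" from these; fields are illustrative.
request_schema = {
    "type": "object",
    "properties": {
        "sepal_length": {"type": "number", "example": 5.1},
        "sepal_width": {"type": "number", "example": 3.5},
    },
}

print(json.dumps(request_schema, indent=2))
```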
We would like all GPflow kernels to broadcast across leading dimensions. For most of them, this is implemented already (#1308); this issue is to keep track of the ones that currently don't:
- ArcCosine
- Coregion
- Periodic
- ChangePoints
- Convolutional
- all MultioutputKernel subclasses
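What "broadcasting across leading dimensions" means can be illustrated with a plain squared-exponential kernel in NumPy (a sketch, not GPflow's implementation): by doing the pairwise arithmetic over the last two axes only, the same code handles inputs of shape [N, D] and [batch..., N, D].

```python
import numpy as np

# Batched RBF kernel: pairwise squared distances over the trailing axes,
# so any leading dimensions broadcast through untouched.
def rbf(X, X2, lengthscale=1.0):
    diff = X[..., :, None, :] - X2[..., None, :, :]
    return np.exp(-0.5 * np.sum(diff**2, axis=-1) / lengthscale**2)

X = np.random.default_rng(0).normal(size=(3, 2, 5, 4))  # leading dims (3, 2)
K = rbf(X, X)
print(K.shape)  # (3, 2, 5, 5)
```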
In the examples like tensorflow_mnist or scikit_learn in advisor_client, the config file has the goal MINIMIZE, while the metric used in the scripts is accuracy. Am I missing something? Shouldn't the goal be MAXIMIZE?
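If the tuner really only minimizes, the usual workaround (a general technique, not something these examples necessarily do) is to report a loss derived from accuracy, such as `1 - accuracy` or `-accuracy`, so that smaller genuinely means better:

```python
# Turn an accuracy into a quantity a MINIMIZE goal can optimize.
# The function name is illustrative, not an advisor API.
def metric_for_minimize(accuracy: float) -> float:
    return 1.0 - accuracy

print(metric_for_minimize(0.92))  # minimizing this maximizes accuracy
```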
PROBLEM FOUND IN DOCS:
The training.transformation.custom_transformer receives a transformation_function as parameter.
The docs say this transformation_function receives a pd.DataFrame and returns a pd.DataFrame but, in fact, it is receiving a pd.Series and returning a pd.Series.
The docs should be updated to reflect the implemented behavior.
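A small example of the implemented (Series-in, Series-out) behavior, assuming pandas is installed; the doubling logic is only an illustration, and `transformation_function` is the parameter name from the docs, not a library function:

```python
import pandas as pd

# A transformation_function matching the implemented behavior:
# it receives a pd.Series and returns a pd.Series, not a pd.DataFrame.
def transformation_function(column: pd.Series) -> pd.Series:
    return column * 2

col = pd.Series([1, 2, 3], name="feature")
out = transformation_function(col)
print(type(out).__name__)  # Series, not DataFrame
print(out.tolist())  # [2, 4, 6]
```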
I am not clear about what the iteration means. Could you explain more about it? Also, how is it related to epoch?
Thanks.
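In the most common usage (an assumption about this project, but standard terminology), one iteration is one batch update and one epoch is one full pass over the training data, so the two are linked by the batch size:

```python
import math

# iterations per epoch = ceil(num_samples / batch_size); the final
# partial batch still counts as one iteration.
def iterations_per_epoch(num_samples: int, batch_size: int) -> int:
    return math.ceil(num_samples / batch_size)

print(iterations_per_epoch(1000, 32))  # 32 iterations make up one epoch
```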
- Wikipedia


Please make sure that this is a bug. As per our
GitHub Policy,
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub. tag:bug_template
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes