probabilistic-programming
Here are 360 public repositories matching this topic...
Are there any plans to add a Zero-Inflated Poisson (ZIP) and a Zero-Inflated Negative Binomial (ZINB) to TFP? These are very common distributions in other packages, and they shouldn't be hard to implement.
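For reference, the zero-inflated Poisson is just a two-component mixture: with probability π the outcome is a structural zero, otherwise it is Poisson(λ). A minimal plain-Python sketch of its log-pmf (not TFP code, just the math the request refers to):

```python
import math

def zip_logpmf(k, rate, zero_prob):
    """Log-probability of a zero-inflated Poisson.

    With probability `zero_prob` the outcome is a structural zero;
    otherwise it is drawn from Poisson(rate).
    """
    if k == 0:
        # P(0) = zero_prob + (1 - zero_prob) * exp(-rate)
        return math.log(zero_prob + (1.0 - zero_prob) * math.exp(-rate))
    # P(k) = (1 - zero_prob) * rate^k * exp(-rate) / k!
    return (math.log(1.0 - zero_prob)
            + k * math.log(rate) - rate - math.lgamma(k + 1))
```

The ZINB case is identical in shape, with the Poisson pmf replaced by the negative-binomial pmf.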
Ankit Shah and I are trying to use Gen to support a project and would love the addition of a Dirichlet distribution.
Currently, random_flax_module and random_haiku_module assume that the prior is either a distribution or a dict of distributions. In case we have a large network and want to specify priors for specific parameters like the
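The snippet above is cut off, but the request is for finer-grained prior specification. A minimal sketch of the lookup pattern being asked for, in plain Python (the function name and `default` fallback are hypothetical illustrations, not NumPyro's actual API):

```python
def resolve_prior(prior, param_name, default=None):
    """Resolve the prior for one named parameter.

    `prior` may be a single distribution (applied to every parameter)
    or a dict mapping parameter names to distributions; names missing
    from the dict fall back to `default`. Hypothetical helper, not
    NumPyro's API.
    """
    if isinstance(prior, dict):
        return prior.get(param_name, default)
    return prior
```

With a scheme like this, a large network only needs explicit entries for the few parameters whose priors differ from the default.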
The current MDN example from the Edward tutorials needs small modifications to run on Edward2. Documentation covering these modifications would be appreciated.
Hi,
It looks like there is support for lots of common distributions. There are a handful of other distributions that are not presently supported but could (fingers crossed) be easily implemented. Looking at [Stan's Function Reference] I see...
- Beta Binomial
- [Chi-Square](https://mc-stan.org/docs/2
Improve tests
There are a variety of interesting optimisations that can be performed on kernels of the form

k(x, z) = w_1 * k_1(x, z) + w_2 * k_2(x, z) + ... + w_L * k_L(x, z)

A naive recursive implementation in terms of the current Sum and Scaled kernels hides opportunities for parallelism, both in the computation of each term and in the summation over terms.
Notable examples of kernels with th
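One way to expose that parallelism is to flatten the recursion: evaluate every term's Gram matrix independently, then collapse the weighted sum in a single contraction. A NumPy sketch of the idea (generic, not the API of any particular kernel library):

```python
import numpy as np

def sum_kernel(kernels, weights, X, Z):
    """Evaluate k(x, z) = sum_i w_i * k_i(x, z) without recursion.

    Stacking the per-term Gram matrices makes each k_i independently
    evaluable (and parallelisable), and reduces the weighted sum over
    terms to one vectorised tensor contraction.
    """
    grams = np.stack([k(X, Z) for k in kernels])             # (L, n, m)
    return np.tensordot(np.asarray(weights), grams, axes=1)  # (n, m)
```

Compared with nesting Sum(Scaled(...), Scaled(...)) pairwise, this evaluates the terms in one flat pass instead of a chain of binary reductions.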
Plotting Docs
GPU Support
Rather than trying to rebuild all functionality from Distributions.jl, we're first focusing on reimplementing logdensity (logpdf in Distributions), and delegating most other functions to the current Distributions implementations.
So, for example, we have

distproxy(d::Normal{(:μ, :σ)}) = Dists.Normal(d.μ, d.σ)

This makes some functions in Distributions.jl available through
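The delegation pattern itself is language-agnostic. A small Python analogue, where a minimal measure type implements logpdf itself and forwards everything else to a proxy (here the stdlib's NormalDist stands in for Distributions.jl; all names are illustrative):

```python
import math
from statistics import NormalDist

class Normal:
    """Implements logpdf directly; delegates other methods to a proxy."""

    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma

    def logpdf(self, x):
        # Reimplemented locally, analogous to logdensity above.
        z = (x - self.mu) / self.sigma
        return -0.5 * z * z - math.log(self.sigma) - 0.5 * math.log(2 * math.pi)

    def distproxy(self):
        # Stand-in for Dists.Normal(d.μ, d.σ).
        return NormalDist(self.mu, self.sigma)

    def __getattr__(self, name):
        # Anything not reimplemented falls through to the proxy,
        # e.g. cdf, pdf, samples.
        return getattr(self.distproxy(), name)
```

This keeps the new type thin while the proxy's full method set remains reachable.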
See discussion #134. It is not clear enough what integration_steps means in the context of NUTS. This is, however, very important for anyone who wants to know how "efficient" a sampler is in terms of the number of gradient evaluations. The docstring should be improved, and the quantity renamed to match what is done in the rest of the library.
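To see why the count matters: in HMC-style samplers, each leapfrog integration step costs one gradient evaluation of the log-density. A hedged sketch (a generic leapfrog with a tally, not the library's actual code):

```python
def leapfrog(grad, q, p, step_size, n_steps, counter):
    """Leapfrog integration of Hamiltonian dynamics.

    `grad` is the gradient of the potential energy. `counter` is a
    one-element list tallying gradient evaluations: an L-step
    trajectory costs L + 1 of them, which is why reporting the number
    of integration steps pins down a sampler's gradient budget.
    Hypothetical helper for illustration only.
    """
    def g(x):
        counter[0] += 1
        return grad(x)

    p = p - 0.5 * step_size * g(q)      # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + step_size * p           # full step for position
        p = p - step_size * g(q)        # full step for momentum
    q = q + step_size * p               # last full position step
    p = p - 0.5 * step_size * g(q)      # final half step for momentum
    return q, p
```

For NUTS the trajectory length varies per iteration, which is exactly why an ambiguously named per-iteration quantity makes the total gradient cost hard to reason about.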
Pyro's HMC and NUTS implementations are feature-complete and well-tested, but they are quite slow on models like the one in our Bayesian regression tutorial that operate on small tensors, for reasons that are largely beyond our control (mostly having to do with the design and implementation of torch.autograd). This is unfortunate because these