autograd
Here are 128 public repositories matching this topic...
How do I import a model or weight file trained with PyTorch?
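As a rough answer sketch: the standard PyTorch pattern is to save and load a `state_dict` (a mapping from parameter names to tensors) rather than the whole model object. `nn.Linear` stands in for whatever model class was actually trained:

```python
import torch
import torch.nn as nn

# Stand-in for the trained model class; replace with your own architecture.
model = nn.Linear(4, 2)

# Save only the weights (the recommended PyTorch practice).
torch.save(model.state_dict(), "weights.pt")

# Load the weight file and import it into a freshly constructed model.
state = torch.load("weights.pt")
model.load_state_dict(state)
model.eval()  # switch to inference mode
```

The model must be instantiated with the same architecture before `load_state_dict` is called; the weight file alone does not carry the model definition.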
Display Issues
Implement TabNet
Description
This issue is to create the TabNet model and add it to the basic model zoo. TabNet is a good example of a deep learning model that works with the tabular modality. It can then be trained or tested against an implementation of CsvDataset such as AirfoilRandomAccess or AmesRandomAccess.
References
- Paper: [TabNet: Attentive Interpretable Tabular Learning](htt
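For orientation on the paper: TabNet's attentive transformer produces its sparse feature-selection masks with sparsemax rather than softmax. A minimal numpy sketch of sparsemax (not part of the issue; the model-zoo implementation would of course live in the framework's own tensor API):

```python
import numpy as np

def sparsemax(z):
    """Project z onto the probability simplex (Martins & Astudillo, 2016).
    Unlike softmax, the result can contain exact zeros, which is what
    gives TabNet's feature masks their sparsity."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # sort descending
    cumsum = np.cumsum(z_sorted)
    k_arr = np.arange(1, len(z) + 1)
    # support: largest k with 1 + k * z_(k) > sum of the top-k entries
    support = 1 + k_arr * z_sorted > cumsum
    k = k_arr[support][-1]
    tau = (cumsum[support][-1] - 1) / k    # threshold
    return np.maximum(z - tau, 0.0)
```

For example, `sparsemax([2.0, 1.0, 0.0])` puts all mass on the first entry, returning `[1.0, 0.0, 0.0]`, whereas softmax would spread mass over every entry.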
Right now, qml.operation.expand_matrix is often called in a code block like:
if wire_order is None or self.wires == Wires(wire_order):
return canonical_matrix
expand_matrix(canonical_matrix, wires=self.wires, wire_order=wire_order)
see [pennylane/operation.py Line 587](https://github.com/PennyLaneAI/pennylane/blob/b6fc5380abea6215661704ebe2f5cb8e7a599635/pennylane/operation.p
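For orientation, a minimal numpy sketch of what such an expansion does, under a simplifying assumption: the operator's wires appear contiguously and in order inside `wire_order` (PennyLane's real `expand_matrix` also handles arbitrary permutations):

```python
import numpy as np

def expand_matrix(mat, wires, wire_order):
    """Embed `mat`, which acts on `wires`, into the larger Hilbert space
    defined by `wire_order` by tensoring identities onto the other wires.
    Simplified sketch: assumes `wires` is a contiguous, in-order slice of
    `wire_order`; the real implementation permutes as needed."""
    pos = list(wire_order).index(wires[0])
    n_before = pos                                   # wires to the left
    n_after = len(wire_order) - pos - len(wires)     # wires to the right
    return np.kron(np.kron(np.eye(2 ** n_before), mat),
                   np.eye(2 ** n_after))
```

For example, a Pauli-X acting on wire 1 expanded to `wire_order=[0, 1]` becomes `kron(I, X)`, a 4x4 matrix.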
Issue to track tutorial requests:
- Deep Learning with PyTorch: A 60 Minute Blitz - #69
- Sentence Classification - #79
I recently submitted a PR and got a really nice and friendly welcome from this guy: https://github.com/apps/welcome
We should add similar bots to ease the workflow by triaging issues and PRs, and to greet people nicely :)
The init module has been deprecated, and the recommended approach for generating initial weights is to use the Template.shape method:
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2) # deprecated
>>> np.random.random(StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach
We should upd
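The point of the new approach is that the template only reports the expected parameter shape, and any array library can fill it. A numpy-only sketch with the shape hard-coded so it runs without PennyLane (for StronglyEntanglingLayers, `shape(n_layers=3, n_wires=2)` corresponds to three rotation angles per wire per layer — an assumption stated here, not verified against the issue):

```python
import numpy as np

# What StronglyEntanglingLayers.shape(n_layers=3, n_wires=2) would report:
# (n_layers, n_wires, 3) -- hard-coded here so the sketch is self-contained.
shape = (3, 2, 3)

# Any initialization strategy now works; uniform [0, 1) shown as an example.
weights = np.random.random(size=shape)
```

Because initialization is just array creation, users can swap in normal draws, constant values, or any custom scheme without the library providing one init function per template.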
Okay, so this might not exactly be a "good first issue" - it is a little more advanced, but is still very much accessible to newcomers.
Similar to the mygrad.nnet.max_pool function, I would like there to be a mean-pooling layer. That is, a convolution-style window is strided over the input, an
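The requested operation can be sketched in plain numpy (`mean_pool_2d` is a hypothetical name; mygrad's version would additionally need to be differentiable and mirror max_pool's actual signature):

```python
import numpy as np

def mean_pool_2d(x, pool=(2, 2), stride=None):
    """Slide a pooling window over a 2D array and average each patch.
    `stride` defaults to the window size (non-overlapping windows)."""
    stride = stride or pool
    ph, pw = pool
    sh, sw = stride
    out_h = (x.shape[0] - ph) // sh + 1
    out_w = (x.shape[1] - pw) // sw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * sh:i * sh + ph, j * sw:j * sw + pw].mean()
    return out
```

For instance, pooling `np.arange(16).reshape(4, 4)` with a 2x2 window averages each non-overlapping 2x2 block into a 2x2 output.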
The current implementation of Zero Redundancy optimizer has its own implementation of object broadcasting.
We should replace it with c10d [broadcast_object_list](https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast_object_lis
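The pattern behind an object-list broadcast, sketched without a process group: the source rank pickles each object, the sizes are communicated first so receivers can split the byte stream, then the concatenated payload is sent and unpickled. A torch-free simulation of the two halves (function names hypothetical; c10d's actual implementation moves the bytes through tensors over the process group):

```python
import pickle

def serialize_objects(objs):
    """Source rank's half: pickle each object and record payload sizes,
    mirroring the size-header-then-payload scheme of an object broadcast."""
    payloads = [pickle.dumps(o) for o in objs]
    sizes = [len(p) for p in payloads]
    return sizes, b"".join(payloads)

def deserialize_objects(sizes, data):
    """Receiving rank's half: split the byte stream by the size header
    and unpickle each object back out."""
    objs, offset = [], 0
    for s in sizes:
        objs.append(pickle.loads(data[offset:offset + s]))
        offset += s
    return objs
```

Reusing the library-provided primitive rather than a hand-rolled version of this scheme keeps the serialization format and error handling in one place.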