Python Neural Network Libraries


Browse free open source Python Neural Network Libraries and projects below.

  • 1
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the new FullyShardedDataParallel (FSDP) wrapper provided by fairscale. Fairseq can be extended through user-supplied plug-ins. Models define the neural network architecture and encapsulate all of the learnable parameters. Criterions compute the loss function given the model outputs and targets. Tasks store dictionaries and provide helpers for loading/iterating over Datasets, initializing the Model/Criterion and calculating the loss.
    Downloads: 8 This Week
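    As a taste of the toolkit, here is a minimal translation sketch using fairseq's torch.hub integration. The model and checkpoint names follow the fairseq README; the tokenizer/BPE extras (sacremoses, fastBPE) must be installed for it to run.

        import torch

        # Load a pretrained WMT'19 English-German transformer from the fairseq hub
        # (downloads weights on first use; requires sacremoses and fastBPE).
        en2de = torch.hub.load(
            "pytorch/fairseq",
            "transformer.wmt19.en-de",
            checkpoint_file="model1.pt",
            tokenizer="moses",
            bpe="fastbpe",
        )
        en2de.eval()

        print(en2de.translate("Machine learning is great!"))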
  • 2
    spaCy

    Industrial-strength Natural Language Processing (NLP)

    spaCy is a library built on the very latest research for advanced Natural Language Processing (NLP) in Python and Cython. From its inception it was designed for real-world applications: for building real products and gathering real insights. It comes with pretrained statistical models and word vectors, convolutional neural network models, easy deep learning integration, and much more. According to independent benchmarks, spaCy is the fastest syntactic parser in the world, with accuracy within 1% of the best available. It is blazing fast, easy to install, and comes with a simple, productive API; a minimal usage sketch follows below.
    Downloads: 7 This Week
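    The usage sketch referenced above, assuming the small English model has been installed with "python -m spacy download en_core_web_sm":

        import spacy

        nlp = spacy.load("en_core_web_sm")
        doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")

        # Per-token annotations from the tagger and parser.
        for token in doc:
            print(token.text, token.pos_, token.dep_)

        # Named entities recognized by the NER component.
        for ent in doc.ents:
            print(ent.text, ent.label_)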
  • 3
    PrettyTensor

    Pretty Tensor: Fluent Networks in TensorFlow

    Pretty Tensor is a high-level API built on top of TensorFlow that simplifies the process of creating and managing deep learning models. It wraps TensorFlow tensors in a chainable object syntax, allowing developers to build multi-layer neural networks with concise and readable code. Pretty Tensor preserves full compatibility with TensorFlow’s core functionality while providing syntactic sugar for defining complex architectures such as convolutional and recurrent networks. The library’s design emphasizes flexibility and modularity, supporting advanced features like default scopes, parameter templates, and variable reuse. It also allows easy integration with custom operations and third-party libraries, making it ideal for both research experimentation and production-grade modeling. By combining TensorFlow’s power with an intuitive builder-style API, Pretty Tensor accelerates model development without sacrificing transparency or control.
    Downloads: 6 This Week
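    A sketch of the chainable builder style described above, from the TensorFlow 1.x era; the method names follow the project's README, though exact signatures vary by release.

        import prettytensor as pt
        import tensorflow as tf  # Pretty Tensor targets TensorFlow 1.x

        images = tf.placeholder(tf.float32, [None, 784])
        labels = tf.placeholder(tf.float32, [None, 10])

        # Each call wraps the previous tensor, so the layer stack reads top to bottom.
        softmax, loss = (pt.wrap(images)
                         .fully_connected(100, activation_fn=tf.nn.relu)
                         .softmax_classifier(10, labels=labels))

        train_op = pt.apply_optimizer(
            tf.train.GradientDescentOptimizer(0.1), losses=[loss])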
  • 4
    SFD

    S³FD: Single Shot Scale-invariant Face Detector, ICCV, 2017

    S³FD (Single Shot Scale-invariant Face Detector) is a real-time face detection framework designed to handle faces of various sizes with high accuracy using a single deep neural network. Developed by Shifeng Zhang, S³FD introduces a scale-compensation anchor matching strategy and enhanced detection architecture that makes it especially effective for detecting small faces—a long-standing challenge in face detection research. The project builds upon the SSD framework in Caffe, with modifications tailored for face detection tasks. It includes training scripts, evaluation code, and pre-trained models that achieve strong results on popular benchmarks such as AFW, PASCAL Face, FDDB, and WIDER FACE. The framework is optimized for speed and accuracy, making it suitable for both academic research and practical applications in computer vision.
    Downloads: 6 This Week
  • 5
    MMDeploy

    OpenMMLab Model Deployment Framework

    MMDeploy is an open-source deep learning model deployment toolset and part of the OpenMMLab project. Models can be exported to and run on several backends, with more becoming compatible over time. All kinds of modules in the SDK can be extended, such as Transform for image processing, Net for neural network inference, and Module for postprocessing. To use it, install and build your target backend; ONNX Runtime, for example, is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Please read getting_started for the basic usage of MMDeploy.
    Downloads: 5 This Week
  • 6
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 4 This Week
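    A sketch of that few-lines-of-code promise, assuming a working alpa installation (multi-node runs additionally need a Ray cluster initialized via alpa.init); the training function itself is illustrative.

        import alpa
        import jax
        import jax.numpy as jnp

        # alpa.init(cluster="ray")  # needed when running on a multi-node cluster

        @alpa.parallelize  # Alpa plans inter-/intra-operator parallelism automatically
        def train_step(params, batch):
            def loss_fn(p):
                preds = batch["x"] @ p["w"]
                return jnp.mean((preds - batch["y"]) ** 2)
            grads = jax.grad(loss_fn)(params)
            return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

        params = {"w": jnp.zeros((128, 1))}
        batch = {"x": jnp.ones((32, 128)), "y": jnp.ones((32, 1))}
        params = train_step(params, batch)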
  • 7
    CNN for Image Retrieval
    cnn-for-image-retrieval is a research-oriented project that demonstrates the use of convolutional neural networks (CNNs) for image retrieval tasks. The repository provides implementations of CNN-based methods to extract feature representations from images and use them for similarity-based retrieval. It focuses on applying deep learning techniques to improve upon traditional handcrafted descriptors by learning features directly from data. The code includes training and evaluation scripts that can be adapted for custom datasets, making it useful for experimenting with retrieval systems in computer vision. By leveraging CNN architectures, the project showcases how learned embeddings can capture semantic similarity across varied images. This resource serves as both an educational reference and a foundation for further exploration in image retrieval research.
    Downloads: 4 This Week
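    The repository ships its own scripts; purely as an illustration of the idea, here is a PyTorch sketch that embeds images with a pretrained CNN and ranks them by cosine similarity. All names below are ours, not the repo's.

        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image

        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled feature
        backbone.eval()

        preprocess = T.Compose([
            T.Resize(256), T.CenterCrop(224), T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        @torch.no_grad()
        def embed(path):
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            f = backbone(x)
            return torch.nn.functional.normalize(f, dim=1)  # unit norm: cosine = dot

        gallery = {p: embed(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
        q = embed("query.jpg")
        ranked = sorted(gallery, key=lambda p: -(q @ gallery[p].T).item())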
  • 8
    Differentiable Neural Computer

    A TensorFlow implementation of the Differentiable Neural Computer

    The Differentiable Neural Computer (DNC), developed by Google DeepMind, is a neural network architecture augmented with dynamic external memory, enabling it to learn algorithms and solve complex reasoning tasks. Published in Nature in 2016 under the paper “Hybrid computing using a neural network with dynamic external memory,” the DNC combines the pattern recognition power of neural networks with a memory module that can be written to and read from in a differentiable way. This allows the model to learn how to store and retrieve information across long time horizons, much like a traditional computer. The architecture consists of modular components including an access module for managing memory operations, a controller (often an LSTM or feedforward network) for issuing read/write commands, and submodules for temporal linkage and memory allocation tracking.
    Downloads: 4 This Week
  • 9
    Mixup-CIFAR10

    mixup: Beyond Empirical Risk Minimization

    mixup-cifar10 is the official PyTorch implementation of “mixup: Beyond Empirical Risk Minimization” (Zhang et al., ICLR 2018), a foundational paper introducing mixup, a simple yet powerful data augmentation technique for training deep neural networks. The core idea of mixup is to generate synthetic training examples by taking convex combinations of pairs of input samples and their labels. By interpolating both data and labels, the model learns smoother decision boundaries and becomes more robust to noise and adversarial examples. This repository implements mixup for the CIFAR-10 dataset, showcasing its effectiveness in improving generalization, stability, and calibration of neural networks. The approach acts as a regularizer, encouraging linear behavior in the feature space between samples, which helps reduce overfitting and enhance performance on unseen data.
    Downloads: 4 This Week
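    The technique itself fits in a few lines; a PyTorch sketch mirroring the repository's mixup_data/mixup_criterion helpers:

        import numpy as np
        import torch
        import torch.nn.functional as F

        def mixup_data(x, y, alpha=1.0):
            """Return mixed inputs, the pair of targets, and the mixing coefficient."""
            lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
            index = torch.randperm(x.size(0))
            mixed_x = lam * x + (1 - lam) * x[index]
            return mixed_x, y, y[index], lam

        def mixup_criterion(logits, y_a, y_b, lam):
            # Convex combination of the two per-example losses.
            return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)

        # Inside a training loop (model and batch assumed):
        # mixed_x, y_a, y_b, lam = mixup_data(inputs, targets, alpha=1.0)
        # loss = mixup_criterion(model(mixed_x), y_a, y_b, lam)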
  • 10
    SVoice (Speech Voice Separation)

    A PyTorch implementation of the paper “Voice Separation with an Unknown Number of Multiple Speakers”

    SVoice is a PyTorch-based implementation of Facebook Research’s study on speaker voice separation as described in the paper “Voice Separation with an Unknown Number of Multiple Speakers.” This project presents a deep learning framework capable of separating mixed audio sequences where several people speak simultaneously, without prior knowledge of how many speakers are present. The model employs gated neural networks with recurrent processing blocks that disentangle voices over multiple computational steps, while maintaining speaker consistency across output channels. Separate models are trained for different speaker counts, and the largest-capacity model dynamically determines the actual number of speakers in a mixture. The repository includes all necessary scripts for training, dataset preparation, distributed training, evaluation, and audio separation.
    Downloads: 4 This Week
  • 11
    FANN

    Fast Artificial Neural Network Library

    Fast Artificial Neural Network Library (FANN) is a free open source neural network library which implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings to more than 15 programming languages are available; a sketch using a Python binding follows below. An easy-to-read introduction article and a reference manual accompany the library, with examples and recommendations on how to use it. Several graphical user interfaces are also available for the library.
    Downloads: 21 This Week
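    The sketch referenced above uses the fann2 Python binding to train XOR; binding details vary by version, so treat the calls as the general shape rather than an exact API reference.

        from fann2 import libfann

        ann = libfann.neural_net()
        ann.create_standard_array([2, 3, 1])  # 2 inputs, 3 hidden, 1 output
        ann.set_activation_function_hidden(libfann.SIGMOID_SYMMETRIC)
        ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC)

        # xor.data uses FANN's plain-text training-data format.
        ann.train_on_file("xor.data", 1000, 100, 0.001)
        print(ann.run([1.0, -1.0]))  # approaches [1.0] after training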
  • 12
    Compare GAN

    Compare GAN code

    compare_gan is a research codebase that standardizes how Generative Adversarial Networks are trained and evaluated so results are comparable across papers and datasets. It offers reference implementations for popular GAN architectures and losses, plus a consistent training harness to remove confounding differences in optimization or preprocessing. The library’s evaluation suite includes widely used metrics and diagnostics that quantify sample quality, diversity, and mode coverage. With configuration-driven experiments, you can sweep hyperparameters, run ablations, and log results at scale. The goal is to turn GAN experimentation into a disciplined, repeatable process rather than a patchwork of scripts. It also provides baselines strong enough to serve as starting points for new ideas without re-implementing the world.
    Downloads: 3 This Week
  • 13
    FairChem

    FAIR Chemistry's library of machine learning methods for chemistry

    FAIRChem is a unified library for machine learning in chemistry and materials, consolidating data, pretrained models, demos, and application code into a single, versioned toolkit. Version 2 modernizes the stack with a cleaner core package and breaking changes relative to V1, focusing on simpler installs and a stable API surface for production and research. The centerpiece models (e.g., UMA variants) plug directly into the ASE ecosystem via a FAIRChem calculator, so users can run relaxations, molecular dynamics, spin-state energetics, and surface catalysis workflows with the same pretrained network by switching a task flag. Tasks span heterogeneous domains—catalysis (OC20-style), inorganic materials (OMat), molecules (OMol), MOFs (ODAC), and molecular crystals (OMC)—allowing one model family to serve many simulations. The README provides quick paths for pulling models (e.g., via Hugging Face access), then running energy/force predictions on GPU or CPU.
    Downloads: 3 This Week
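    An ASE-based sketch of the workflow described above. The import path, model name, and task flag are assumptions based on the V2 README, and pretrained checkpoints may require Hugging Face access; verify against your installed version.

        from ase.build import fcc111
        from ase.optimize import BFGS
        # Assumed V2 import path; check your fairchem release.
        from fairchem.core import FAIRChemCalculator, pretrained_mlip

        # "uma-s-1" and task_name="oc20" are assumed example values.
        predictor = pretrained_mlip.get_predict_unit("uma-s-1", device="cpu")
        calc = FAIRChemCalculator(predictor, task_name="oc20")

        # Relax a copper slab with the pretrained universal model.
        slab = fcc111("Cu", size=(3, 3, 4), vacuum=10.0)
        slab.calc = calc
        BFGS(slab).run(fmax=0.05, steps=100)
        print(slab.get_potential_energy())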
  • 14
    Haiku

    JAX-based neural network library

    Haiku is a simple neural network library for JAX, built to provide composable abstractions for machine learning research. It lets users write familiar object-oriented code while retaining full access to JAX’s pure function transformations. Haiku makes common tasks, such as managing model parameters and other model state, simpler, and it is similar in spirit to Sonnet, the library widely used across DeepMind: it preserves Sonnet’s module-based programming model for state management while exposing JAX’s function transformations. Haiku is designed to compose with other libraries and work well with the rest of JAX. As with Sonnet, Haiku modules are Python objects that hold references to their own parameters, other modules, and methods that apply functions on user inputs; a minimal example follows below.
    Downloads: 3 This Week
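    The minimal example referenced above: define a forward function using module objects, then hk.transform turns it into a pure (init, apply) pair for JAX.

        import haiku as hk
        import jax
        import jax.numpy as jnp

        def forward(x):
            mlp = hk.nets.MLP([128, 64, 10])  # modules hold parameters by name
            return mlp(x)

        model = hk.transform(forward)

        rng = jax.random.PRNGKey(42)
        x = jnp.ones((8, 784))
        params = model.init(rng, x)           # collect parameters as a pytree
        logits = model.apply(params, rng, x)  # pure function: params in, outputs out
        print(logits.shape)                   # (8, 10)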
  • 15
    Minkowski Engine

    Auto-diff neural network library for high-dimensional sparse tensors

    The Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers, such as convolution, pooling, unpooling, and broadcasting operations, for sparse tensors, along with various functions that can be built on a sparse tensor; a small convolution sketch follows below. The project lists several popular network architectures and applications; to run the examples, install the package and run the command in the package root directory. Compressing a neural network to speed up inference and minimize memory footprint has been studied widely. One popular technique for model compression is pruning the weights in convnets, also known as sparse convolutional networks. Note that such parameter-space sparsity compresses networks that operate on dense tensors, and all intermediate activations of those networks are also dense tensors.
    Downloads: 3 This Week
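    The convolution sketch referenced above. Coordinates are integer indices whose first column is the batch index (2-D here for brevity); check your MinkowskiEngine version for exact constructor details.

        import torch
        import MinkowskiEngine as ME

        # Three points in batch 0; columns are (batch, x, y).
        coords = torch.IntTensor([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
        feats = torch.rand(3, 8)

        x = ME.SparseTensor(features=feats, coordinates=coords)

        conv = ME.MinkowskiConvolution(
            in_channels=8, out_channels=16, kernel_size=3, dimension=2)
        y = conv(x)
        print(y.F.shape)  # (3, 16): one feature row per retained coordinate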
  • 16
    Neural Network Intelligence

    AutoML toolkit for automating the machine learning lifecycle

    Neural Network Intelligence (NNI) is an open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning. NNI is a lightweight but powerful toolkit that helps users automate feature engineering, neural architecture search, hyperparameter tuning, and model compression. The tool manages automated machine learning (AutoML) experiments, dispatching and running the trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different training environments, such as a local machine, remote servers, OpenPAI, Kubeflow, FrameworkController on K8s (AKS, etc.), DLWorkspace (aka DLTS), AML (Azure Machine Learning), and other cloud options. NNI provides a command-line tool as well as a user-friendly WebUI to manage training experiments; a trial-script sketch follows below.
    Downloads: 3 This Week
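    The trial-script sketch referenced above: a trial asks the tuner for hyperparameters and reports its result back, while the search space and experiment configuration live in separate files (see the NNI docs).

        import nni

        def train_and_eval(lr, batch_size):
            # ... train a model here and return validation accuracy ...
            return 0.9  # placeholder for illustration

        params = nni.get_next_parameter()  # e.g. {"lr": 0.01, "batch_size": 64}
        accuracy = train_and_eval(params.get("lr", 0.01), params.get("batch_size", 64))
        nni.report_final_result(accuracy)  # the tuner uses this to pick the next trial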
  • 17
    Stanza

    Stanford NLP Python library for many human languages

    Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing. Stanza is a Python natural language analysis package. It contains tools, which can be used in a pipeline, to convert a string containing human language text into lists of sentences and words, to generate base forms of those words along with their parts of speech and morphological features, to produce a syntactic dependency parse, and to recognize named entities. The toolkit is designed to be parallel among more than 70 languages, using the Universal Dependencies formalism. Stanza is built with highly accurate neural network components that also enable efficient training and evaluation with your own annotated data.
    Downloads: 3 This Week
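    A minimal pipeline sketch; the first run downloads the English models.

        import stanza

        stanza.download("en")
        nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse,ner")

        doc = nlp("Barack Obama was born in Hawaii.")
        for sent in doc.sentences:
            for word in sent.words:
                print(word.text, word.lemma, word.upos, word.deprel)
            for ent in sent.ents:
                print(ent.text, ent.type)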
  • 18
    Video Nonlocal Net

    Non-local Neural Networks for Video Classification

    video-nonlocal-net implements Non-local Neural Networks for video understanding, adding long-range dependency modeling to 2D/3D ConvNet backbones. Non-local blocks compute attention-like responses across all positions in space-time, allowing a feature at one frame and location to aggregate information from distant frames and regions. This formulation improves action recognition and spatiotemporal reasoning, especially for classes requiring context beyond short temporal windows. The repo provides training recipes and models for standard datasets, as well as ablations that show how many non-local blocks to insert and at which stages. Efficient implementations keep memory and compute manageable so the blocks can be added without rewriting the entire backbone. The result is a practical, drop-in mechanism for upgrading purely local video models into context-aware networks with strong benchmark performance.
    Downloads: 3 This Week
  • 19
    AIMET

    AIMET is a library that provides advanced quantization and compression

    Qualcomm Innovation Center (QuIC) is at the forefront of enabling low-power inference at the edge through its pioneering model-efficiency research. QuIC has a mission to help migrate the ecosystem toward fixed-point inference. With this goal, QuIC presents the AI Model Efficiency Toolkit (AIMET) - a library that provides advanced quantization and compression techniques for trained neural network models. AIMET enables neural networks to run more efficiently on fixed-point AI hardware accelerators. Quantized inference is significantly faster than floating point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. Plus, an 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, often when quantizing a machine learning model (e.g., from 32-bit floating point to an 8-bit fixed point value), the model accuracy is sacrificed.
    Downloads: 2 This Week
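    A sketch of AIMET's quantization-simulation flow for PyTorch, following the documented QuantSim pattern; argument names may differ across AIMET releases, so treat them as assumptions.

        import torch
        from aimet_torch.quantsim import QuantizationSimModel

        # A small stand-in model; any trained torch.nn.Module works.
        model = torch.nn.Sequential(
            torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(8, 10),
        ).eval()
        dummy_input = torch.randn(1, 3, 224, 224)

        # Wrap the FP32 model so inference simulates 8-bit fixed point.
        sim = QuantizationSimModel(model, dummy_input=dummy_input,
                                   default_param_bw=8, default_output_bw=8)

        # Calibration pass: AIMET computes quantization encodings (scale/offset)
        # from representative data run through the model.
        def forward_pass(m, _):
            with torch.no_grad():
                m(dummy_input)

        sim.compute_encodings(forward_pass, forward_pass_callback_args=None)
        # Evaluate sim.model to estimate the accuracy of quantized inference.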
  • 20
    I3D models trained on Kinetics

    Convolutional neural network model for video classification

    Kinetics-I3D, developed by Google DeepMind, provides trained models and implementation code for the Inflated 3D ConvNet (I3D) architecture introduced in the paper “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset” (CVPR 2017). The I3D model extends the 2D convolutional structure of Inception-v1 into 3D, allowing it to capture spatial and temporal information from videos for action recognition. This repository includes pretrained I3D models on the Kinetics dataset, with both RGB and optical flow input streams. The models have achieved state-of-the-art results on benchmark datasets such as UCF101 and HMDB51, and also won first place in the CVPR 2017 Charades Challenge. The project provides TensorFlow and Sonnet-based implementations, pretrained checkpoints, and example scripts for evaluating or fine-tuning models. It also offers sample data, including preprocessed video frames and optical flow arrays, to demonstrate how to run inference and visualize outputs.
    Downloads: 2 This Week
  • 21
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    Implementation of Imagen, Google's text-to-image neural network that beats DALL-E 2, in PyTorch. It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E 2: it consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (an attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient U-Net design. It appears neither CLIP nor a prior network is needed after all. And so research continues. For simpler training, you can directly supply text strings instead of precomputing text encodings (although for scaling purposes, you will definitely want to precompute the textual embeddings and mask); a condensed usage sketch follows below.
    Downloads: 2 This Week
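    The condensed usage sketch referenced above, following the README's pattern of training a two-unet cascade; the hyperparameter values are illustrative.

        import torch
        from imagen_pytorch import Unet, Imagen

        # Base 64x64 unet and a 64->256 super-resolution unet.
        unet1 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8),
                     layer_attns=(False, True, True, True),
                     layer_cross_attns=(False, True, True, True))
        unet2 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8),
                     layer_attns=(False, False, False, True),
                     layer_cross_attns=(False, False, False, True))

        imagen = Imagen(unets=(unet1, unet2), image_sizes=(64, 256),
                        timesteps=1000, cond_drop_prob=0.1)

        texts = ["a whale breaching from afar"] * 4
        images = torch.randn(4, 3, 256, 256)

        # Train each unet in the cascade on (image, text) pairs.
        for unet_number in (1, 2):
            loss = imagen(images, texts=texts, unet_number=unet_number)
            loss.backward()

        # After training: sample images directly from text strings.
        # samples = imagen.sample(texts=texts, cond_scale=3.0)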
  • 22
    NeuMan

    Neural Human Radiance Field from a Single Video (ECCV 2022)

    NeuMan is a reference implementation that reconstructs both an animatable human and its background scene from a single monocular video using neural radiance fields. It supports novel view and novel pose synthesis from the learned radiance fields, enabling compositional results such as transferring reconstructed humans into new scenes. The pipeline separates the human body from the environment, learning consistent geometry and appearance to support animation. Demos showcase sequences such as dance and handshake, and the code provides guidance for running evaluations and rendering. As a research release, it serves both as a baseline and as a starting point for work on human-centric NeRFs. The emphasis is on practical reconstruction quality from minimal capture setups.
    Downloads: 2 This Week
  • 23
    TensorNetwork

    A library for easy and efficient manipulation of tensor networks

    TensorNetwork is a high-level library for building and contracting tensor networks—graphical factorizations of large tensors that underpin many algorithms in physics and machine learning. It abstracts networks as nodes and edges, then compiles efficient contraction orders across multiple numeric backends so users can focus on model structure rather than index bookkeeping. Common network families (MPS/TT, PEPS, MERA, tree networks) are expressed with concise APIs that encourage experimentation and comparison. The library provides automatic path finding and cost estimation, exposing when contractions will explode in memory and suggesting better orders. Because it supports backends such as NumPy, TensorFlow, PyTorch, and JAX, the same model can run on CPUs, GPUs, or TPUs with minimal code changes. Tutorials and visualization helpers make it easier to understand how network topology affects expressive power and computational cost.
    Downloads: 2 This Week
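    A minimal contraction sketch: two nodes, one shared edge, one contraction, which in this case is just a matrix product.

        import numpy as np
        import tensornetwork as tn

        a = tn.Node(np.ones((2, 3)))
        b = tn.Node(np.ones((3, 4)))

        edge = a[1] ^ b[0]     # connect the shared index (dimension 3)
        c = tn.contract(edge)  # contract over it

        print(c.tensor.shape)  # (2, 4)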
  • 24
    CoreNet

    CoreNet: A library for training deep neural networks

    CoreNet is Apple’s deep learning framework for distributed neural network training, designed for high scalability, low-latency communication, and strong hardware efficiency. It focuses on enabling large-scale model training across clusters of GPUs and accelerators by optimizing data flow and parallelism strategies. CoreNet provides abstractions for data, tensor, and pipeline parallelism, allowing models to scale without code duplication or heavy manual configuration. Its distributed runtime manages synchronization, load balancing, and mixed-precision computation to maximize throughput while minimizing communication bottlenecks. CoreNet integrates tightly with Apple’s ML stack and hardware, serving as the foundation for research in computer vision, language models, and multimodal systems within Apple AI. The framework includes monitoring tools, fault tolerance mechanisms, and efficient checkpointing for massive training runs.
    Downloads: 1 This Week
  • 25
    DeepXDE

    A library for scientific machine learning & physics-informed learning

    DeepXDE is a library for scientific machine learning and physics-informed learning. DeepXDE includes the following algorithms:

    - Physics-informed neural network (PINN), applied to a range of problems:
      - solving forward/inverse ordinary/partial differential equations (ODEs/PDEs) [SIAM Rev.]
      - solving forward/inverse integro-differential equations (IDEs) [SIAM Rev.]
      - fPINN: solving forward/inverse fractional PDEs (fPDEs) [SIAM J. Sci. Comput.]
      - NN-arbitrary polynomial chaos (NN-aPC): solving forward/inverse stochastic PDEs (sPDEs) [J. Comput. Phys.]
      - PINN with hard constraints (hPINN): solving inverse design/topology optimization [SIAM J. Sci. Comput.]
    - Residual-based adaptive sampling [SIAM Rev., arXiv]
    - Gradient-enhanced PINN (gPINN) [Comput. Methods Appl. Mech. Eng.]
    - PINN with multi-scale Fourier features [Comput. Methods Appl. Mech. Eng.]

    A minimal PINN sketch follows below.
    Downloads: 1 This Week
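    The PINN sketch referenced above solves a 1-D Poisson problem, -u'' = pi^2 sin(pi x) with zero boundary values (solution u = sin(pi x)), assuming the TensorFlow backend; API names follow current docs but can shift between releases.

        import numpy as np
        import deepxde as dde
        from deepxde.backend import tf  # assumes the TensorFlow backend

        def pde(x, y):
            # Residual of -u'' - pi^2 sin(pi x) = 0.
            dy_xx = dde.grad.hessian(y, x)
            return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)

        geom = dde.geometry.Interval(-1, 1)
        bc = dde.icbc.DirichletBC(geom, lambda x: 0.0,
                                  lambda x, on_boundary: on_boundary)
        data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)

        net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot normal")
        model = dde.Model(data, net)
        model.compile("adam", lr=1e-3)
        model.train(iterations=5000)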