Julia is a high-level, high-performance, dynamic programming language. While it is a general-purpose language and can be used to write any application, many of its features are well-suited for high-performance numerical analysis and computational science.
Julia runs on a wide range of hardware, including GPUs, and is available on the major cloud platforms. Machine learning support in Julia is improving steadily, and the language's combination of dynamic semantics and native performance makes it a strong fit for deep learning applications.
TOP 10 MACHINE LEARNING FRAMEWORKS IN JULIA IN 2021
MLBase.jl is a Julia package that provides useful tools for machine learning applications. It can be considered a Swiss Army knife for writing machine learning code.
This package does not implement specific machine learning algorithms. Instead, it provides a collection of useful tools to support machine learning programs, including:
- Data manipulation & preprocessing
- Score-based classification
- Performance evaluation (e.g. evaluating ROC)
- Cross validation
- Model tuning (i.e. searching for the best parameter settings)
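A minimal sketch of the evaluation and cross-validation tools, using toy labels invented here for illustration (the function names `correctrate`, `confusmat`, and `Kfold` are from MLBase.jl's API):

```julia
using MLBase

# Ground-truth and predicted labels for a toy 3-class problem
gt   = [1, 1, 2, 2, 3, 3, 3, 1]
pred = [1, 2, 2, 2, 3, 3, 1, 1]

# Performance evaluation: fraction of correct predictions
acc = correctrate(gt, pred)   # 6 of 8 correct → 0.75

# Confusion matrix over the 3 classes
C = confusmat(3, gt, pred)

# Cross validation: Kfold iterates over training-index subsets
for train_inds in Kfold(length(gt), 4)
    # fit a model on gt[train_inds] here
end
```

Note that MLBase.jl works with integer-encoded labels; its `labelmap`/`labelencode` helpers convert arbitrary label values to integers first.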
Strada is a Deep Learning library for Julia, based on the popular Caffe framework developed by BVLC and the Berkeley computer vision community. It supports convolutional and recurrent neural network training, both on the CPU and GPU. Some highlights:
- Simplicity for both advanced users and novices. It is easy to install and is also a good platform for teaching.
- Flexibility, especially for research across domains such as mathematical optimization, computer vision, natural language processing, and reinforcement learning.
- Integration with Julia: Strada is distributed with a version of Caffe that has minimal dependencies and was integrated with Julia’s linear algebra subroutines, Julia’s tensor manipulation routines and Julia’s error handling system.
- Support for Caffe features: It is easy to rebase Strada onto a different Caffe version with additional pull requests integrated (for example, multi-GPU support).
- Open source: Strada is distributed under a BSD license.
TextAnalysis.jl is an actively-developed Julia library for text analysis. It provides functionality for the preprocessing of documents, corpus creation, document term matrices, TF-IDF, Latent Semantic Analysis, Latent Dirichlet Allocation, and more.
It seems to be the place to start if you are interested in text analytics using Julia.
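A short sketch of that pipeline, from raw strings through preprocessing to a TF-IDF-weighted document-term matrix; the two sample sentences are made up for illustration:

```julia
using TextAnalysis

# Build a small corpus of plain-text documents
crps = Corpus([StringDocument("To be or not to be"),
               StringDocument("To become or not to become")])

# Preprocessing: strip punctuation and case-fold in place
prepare!(crps, strip_punctuation)
remove_case!(crps)

# Document-term matrix and TF-IDF weighting
update_lexicon!(crps)
m = DocumentTermMatrix(crps)
tfidf = tf_idf(m)   # sparse matrix: one row per document, one column per term
```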
MXNet.jl is the Julia package for dmlc/mxnet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Feature highlights include:
- Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
- Flexible symbolic manipulation for composing and constructing state-of-the-art deep learning models.
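The symbolic composition style can be sketched as follows, using MXNet.jl's `@mx.chain` macro to pipe layers together into a small multilayer perceptron (layer names and sizes here are arbitrary examples):

```julia
using MXNet

# Compose a multilayer perceptron symbolically; => pipes each
# symbol into the next layer's data input
mlp = @mx.chain mx.Variable(:data)                  =>
      mx.FullyConnected(name=:fc1, num_hidden=128)  =>
      mx.Activation(name=:relu1, act_type=:relu)    =>
      mx.FullyConnected(name=:fc2, num_hidden=10)   =>
      mx.SoftmaxOutput(name=:softmax)
```

The resulting symbolic graph is only a description of the computation; it is bound to concrete CPU or GPU devices later, which is what enables the multi-device execution mentioned above.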
The scikit-learn Python library has proven very popular with machine learning researchers and data scientists in the last five years. It provides a uniform interface for training and using models, as well as a set of tools for chaining (pipelines), evaluating, and tuning model hyperparameters. ScikitLearn.jl brings these capabilities to Julia. Its primary goal is to integrate both Julia- and Python-defined models together into the scikit-learn framework.
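A minimal sketch of that uniform interface, fitting a Python-defined model through ScikitLearn.jl's `@sk_import` bridge on a tiny made-up dataset (requires a Python scikit-learn installation):

```julia
using ScikitLearn
@sk_import linear_model: LogisticRegression

# Toy data: two well-separated classes
X = [1.0 2.0; 2.0 1.0; 7.0 8.0; 8.0 7.0]
y = [0, 0, 1, 1]

# The same fit!/predict verbs work for Julia- and Python-defined models
model = LogisticRegression()
fit!(model, X, y)
predict(model, X)
```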
TensorFlow.jl is a wrapper around TensorFlow, a popular open-source machine learning framework from Google. The wrapper can be used for tasks such as fast ingestion of data in a variety of formats and fast post-processing of inference results.
Knet is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia.
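The dynamic-graph differentiation can be sketched with Knet's `Param`, `@diff`, and `grad` primitives; the vectors below are made-up example values:

```julia
using Knet

# A Param marks a value whose gradient we want
w = Param([1.0, 2.0, 3.0])
x = [2.0, 2.0, 2.0]

# @diff records a dynamic computational graph while evaluating
# the expression, which here is an ordinary Julia expression
J = @diff sum(w .* x)

# grad extracts dJ/dw from the recorded tape; for sum(w .* x)
# the gradient with respect to w is simply x
grad(J, w)
```

Because the graph is rebuilt on every call, models can use plain Julia control flow (loops, conditionals, recursion) and still be differentiated.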
Knet is an open-source project that is always open to new contributions: bug reports and fixes, feature requests, and new machine learning models and operators.
Flux: The Julia Machine Learning Library
Flux is a library for machine learning. It comes “batteries-included” with many useful tools built in, but also lets you use the full power of the Julia language where you need it.
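Both sides of that trade-off can be sketched briefly: `gradient` differentiates plain Julia functions, while `Chain`/`Dense` are the "batteries-included" building blocks (the layer sizes below are arbitrary):

```julia
using Flux

# Full power of Julia: differentiate an ordinary function
# d/dx (3x^2 + 2x + 1) = 6x + 2, so at x = 5 the gradient is 32
df = gradient(x -> 3x^2 + 2x + 1, 5.0)

# Batteries included: a small two-layer classifier
model = Chain(Dense(4, 3, relu), Dense(3, 2), softmax)
model(rand(Float32, 4))   # probabilities over 2 classes
```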
Mocha is a Deep Learning framework for Julia, inspired by the C++ framework Caffe. Efficient implementations of general stochastic gradient solvers and common layers in Mocha can be used to train deep or shallow (convolutional) neural networks, with optional unsupervised pre-training via (stacked) auto-encoders. Some highlights:
- Modular Architecture: Mocha has a clean architecture with isolated components like network layers, activation functions, solvers, regularizers, initializers, etc. The built-in components are sufficient for typical deep (convolutional) neural network applications.
- High-level Interface: Mocha is written in Julia, a high-level dynamic programming language designed for scientific computing. Combined with the expressive power of Julia and its package ecosystem, playing with deep neural networks in Mocha is easy and intuitive.
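A hedged sketch of that modular style, wiring isolated layer components into a net the way Mocha's examples do (the network name, dimensions, and random in-memory data are invented for illustration, and the project is no longer actively maintained):

```julia
using Mocha

# Each layer is an isolated component, connected by named blobs
# via its tops (outputs) and bottoms (inputs)
data = MemoryDataLayer(name="data", tops=[:data, :label], batch_size=64,
                       data=Array[rand(28, 28, 1, 64), rand(1, 64)])
ip1  = InnerProductLayer(name="ip1", output_dim=100, neuron=Neurons.ReLU(),
                         bottoms=[:data], tops=[:ip1])
ip2  = InnerProductLayer(name="ip2", output_dim=10,
                         bottoms=[:ip1], tops=[:ip2])
loss = SoftmaxLossLayer(name="loss", bottoms=[:ip2, :label])

# Swapping CPUBackend for a GPU backend changes the device,
# not the network definition
backend = CPUBackend()
init(backend)
net = Net("mlp-example", backend, [data, ip1, ip2, loss])
```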
Beyond the frameworks covered here, many more algorithm-specific machine learning packages are available in the Julia ecosystem.