Pykan is the official implementation of the research presented in the papers KAN: Kolmogorov-Arnold Networks and KAN 2.0: Kolmogorov-Arnold Networks Meet Science. KANs are designed as an alternative to traditional Multi-Layer Perceptrons (MLPs), often offering better accuracy and interpretability by leveraging the Kolmogorov-Arnold representation theorem. Unlike MLPs, which apply fixed activation functions at nodes and learn linear weights on edges, KANs place learnable activation functions (parametrized as splines) on the edges themselves, an architecture that can outperform MLPs in various scientific and machine learning tasks.
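In practice, the quickstart workflow looks roughly like the sketch below, adapted from the repository's hello-KAN example. Exact import paths and method names (for instance, fit vs. train) vary across pykan versions, so treat this as illustrative rather than definitive.

```python
from kan import KAN, create_dataset
import torch

# Build a small KAN: 2 inputs, 5 hidden units, 1 output.
# Each edge carries a learnable cubic spline (k=3) on a 5-interval grid.
model = KAN(width=[2, 5, 1], grid=5, k=3, seed=0)

# Toy regression dataset generated from a known symbolic target.
f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)

# Train with LBFGS; newer pykan releases use .fit, older ones use .train.
model.fit(dataset, opt="LBFGS", steps=20)
```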
Key Features
- Mathematical Foundation: Built on the Kolmogorov-Arnold representation theorem, ensuring strong theoretical backing.
- Improved Performance: Often delivers better accuracy and interpretability compared to MLPs.
- Easy Installation: Available via PyPI (`pip install pykan`) or directly from GitHub.
- Comprehensive Documentation: Includes quickstart guides, tutorials, and detailed reference documentation to help users get started quickly.
- Flexible Training: Supports training on CPUs and includes tools for hyperparameter tuning, grid extension, and sparsification for interpretability.
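The grid-extension and sparsification tools mentioned above are used roughly as in the following sketch, which continues from the quickstart model. Method names (fit, prune, refine) and the lamb regularization argument follow recent pykan tutorials and may differ in older releases.

```python
# Sparsification: train with L1/entropy regularization (lamb), then prune
# away edges and nodes that contribute little, for a more interpretable network.
model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01)
model = model.prune()

# Grid extension: refine each spline grid (e.g. from 5 to 10 points)
# and continue training to increase accuracy.
model = model.refine(10)
model.fit(dataset, opt="LBFGS", steps=20)

# Visualize the learned edge activation functions.
model.plot()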
Use Cases
- Scientific Computing: Ideal for tasks requiring high precision and interpretability, such as solving partial differential equations (PDEs).
- Machine Learning: Suitable for small to medium-scale tasks where accuracy and model transparency are crucial.
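For interpretability-focused use cases, a trained KAN can be distilled into a closed-form symbolic expression. The sketch below follows the pykan tutorials; the method names are assumed from recent versions of the library.

```python
# Snap each trained edge activation to its best-matching symbolic primitive
# from a user-supplied library of candidate functions.
model.auto_symbolic(lib=['x', 'x^2', 'exp', 'sin', 'sqrt', 'log'])

# Fine-tune the remaining affine parameters, then read off a symbolic formula.
model.fit(dataset, opt="LBFGS", steps=20)
print(model.symbolic_formula())
```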
Installation and Setup
Pykan requires Python 3.9.7 or higher and can be installed via pip or Conda. Detailed setup instructions, including environment configuration and dependency management, are provided in the repository.
Community and Collaboration
The project welcomes contributions, especially from researchers and developers interested in scientific discoveries and scientific computing. The maintainer encourages users to explore KANs critically and share their findings to push the boundaries of this innovative approach.
For more details, visit the GitHub repository and check out the documentation.