AscendNPU IR documentation


Related projects and thanks

This document lists open-source projects and ecosystems closely related to AscendNPU IR and thanks the LLVM/MLIR and other communities.

MLIR

MLIR originates from the LLVM community and provides reusable, extensible compiler infrastructure. AscendNPU IR is built on MLIR. We thank all developers and contributors in the LLVM/MLIR community. AscendNPU IR benefits from MLIR in these ways:

  • Modular design: Define IR at different abstraction levels for progressive lowering.

  • Reuse of infrastructure: Parsing, transformation, optimization, and code generation from MLIR.

  • Ecosystem interoperability: Because AscendNPU IR's dialects extend MLIR, they can convert to and from other MLIR-based IRs (e.g. TensorFlow and PyTorch front-end dialects) and integrate with upper-level frameworks.
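As an illustration of the modular, progressive-lowering design mentioned above, here is a minimal sketch using upstream MLIR dialects only (linalg, scf, memref, arith) rather than AscendNPU IR's own dialects: the same matmul first at a high tensor-level abstraction, then after lowering to explicit loops over buffers.

```mlir
// Illustrative only: upstream MLIR dialects, not AscendNPU IR's
// hfusion/hivm/hacc dialects.

// Stage 1 -- high-level abstraction: a matmul on immutable tensors.
func.func @matmul(%a: tensor<16x32xf32>, %b: tensor<32x8xf32>,
                  %c: tensor<16x8xf32>) -> tensor<16x8xf32> {
  %0 = linalg.matmul ins(%a, %b : tensor<16x32xf32>, tensor<32x8xf32>)
                     outs(%c : tensor<16x8xf32>) -> tensor<16x8xf32>
  return %0 : tensor<16x8xf32>
}

// Stage 2 -- after progressive lowering: the same computation as
// explicit loops over mutable buffers.
func.func @matmul_lowered(%a: memref<16x32xf32>, %b: memref<32x8xf32>,
                          %c: memref<16x8xf32>) {
  %c0  = arith.constant 0  : index
  %c1  = arith.constant 1  : index
  %c8  = arith.constant 8  : index
  %c16 = arith.constant 16 : index
  %c32 = arith.constant 32 : index
  scf.for %i = %c0 to %c16 step %c1 {
    scf.for %j = %c0 to %c8 step %c1 {
      scf.for %k = %c0 to %c32 step %c1 {
        %x   = memref.load %a[%i, %k] : memref<16x32xf32>
        %y   = memref.load %b[%k, %j] : memref<32x8xf32>
        %acc = memref.load %c[%i, %j] : memref<16x8xf32>
        %p   = arith.mulf %x, %y   : f32
        %s   = arith.addf %acc, %p : f32
        memref.store %s, %c[%i, %j] : memref<16x8xf32>
      }
    }
  }
  return
}
```

Each stage is legal IR in the same module, which is what lets a backend such as AscendNPU IR insert its own hardware-aware dialects between these levels.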

Triton-Ascend

Triton-Ascend brings Triton programming to Ascend, so Triton code runs efficiently on Ascend hardware. AscendNPU IR serves as Triton's compilation backend, letting developers write high-performance Ascend NPU kernels with the familiar Triton syntax and programming model, which lowers the barrier to entry for Python developers.
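As a sketch of that programming model, here is the canonical vector-add kernel from the upstream Triton tutorials; the kernel itself contains nothing Ascend-specific, and running it on Ascend assumes the Triton-Ascend toolchain (with AscendNPU IR as backend) is installed.

```python
# Illustrative only: the canonical Triton vector-add tutorial kernel.
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # this program's block id
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)
```

On Ascend, Triton-Ascend lowers this kernel through AscendNPU IR instead of the GPU backends, so the same Python source targets NPU hardware.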

TileLang-Ascend

TileLang is a domain-specific language for tensor computation; TileLang-Ascend is its Ascend-oriented version. By using AscendNPU IR as the backend, TileLang-Ascend leverages AscendNPU IR’s Ascend-aware optimizations to generate high-performance Ascend operators.

Copyright © 2026, Huawei
Made with Sphinx and @pradyunsg's Furo
Last updated on Apr 17, 2026