# 'hivm' Dialect

HIVM (Hybrid Intelligence Virtual Machine) dialect.

## Operations

### `hivm.hir.atomic_cas` (hivm::AtomicCasOp)

_Atomic Compare-And-Swap (CAS) Op_

Syntax:

```mlir
operation ::= `hivm.hir.atomic_cas` attr-dict `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`->` type($result_tensor)^)?
```

Compare-And-Swap (CAS) is an atomic operation with three operands: a memory location (V), an expected old value (A), and a new value (B). The semantics are: the value of V is updated to B only if the current value of V equals the expected old value A. The operation returns the original value of V regardless of whether it was updated.

Constraints:

1. The input memref and output memref must have the same rank and the same element type.

Arguments:

* `src0`: expected old value
* `src1`: new value
* `dst`: memory location in GM

Examples:

```mlir
hivm.hir.atomic_cas ins(%src0, %src1 : memref, memref) outs(%dst : memref)
%result = hivm.hir.atomic_cas ins(%src0, %src1 : tensor, tensor) outs(%dst : tensor) -> tensor
```

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of Tensor or Memref |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | Tensor or Memref |

### `hivm.hir.atomic_rmw` (hivm::AtomicRMWOp)

_Atomic RMW Op_

Syntax:

```mlir
operation ::= `hivm.hir.atomic_rmw` attr-dict `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              `atomic_kind` `=` $atomic_kind
              (`->` type($result_tensor)^)?
```

Atomic read-modify-write (RMW) is an atomic operation that consists of three steps:

1. Read the current value of the specified memory address
2. Perform an action on the value depending on the `atomic_kind` attribute
3. Return the old value read previously

The whole process is atomic, that is, it will not be interrupted by other threads during the operation.

Constraints:

1. The input memref and output memref must have the same rank and the same element type.

Arguments:

* `src`: new value
* `dst`: memory location in GM

Examples:

```mlir
hivm.hir.atomic_rmw ins(%src : memref) outs(%dst : memref) atomic_kind =
%result = hivm.hir.atomic_rmw ins(%src : tensor) outs(%dst : tensor) atomic_kind = -> tensor
```

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `atomic_kind` | `::mlir::hivm::AtomicKindAttr` | Atomic operation kind for StoreOp. HIVM atomic store kind attribute. |
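The read-modify-write semantics above can be sketched in plain Python. This is an illustration of the semantics only, not HIVM code; the kind names in `_OPS` are hypothetical stand-ins for the dialect's `AtomicKindAttr` values.

```python
import threading

# Hypothetical subset of atomic kinds; the real values come from AtomicKindAttr.
_OPS = {
    "add": lambda old, new: old + new,
    "max": lambda old, new: max(old, new),
    "min": lambda old, new: min(old, new),
}

_lock = threading.Lock()

def atomic_rmw(memory, index, value, kind):
    """Atomically apply `kind` to memory[index] and return the old value."""
    with _lock:  # models the hardware guarantee that the RMW is uninterruptible
        old = memory[index]
        memory[index] = _OPS[kind](old, value)
        return old

mem = [10]
old = atomic_rmw(mem, 0, 5, "add")
# old == 10, mem[0] == 15
```

The lock models atomicity only; on the device the guarantee comes from the hardware, not from software locking.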
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | any type |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | Tensor or Memref |

### `hivm.hir.atomic_xchg` (hivm::AtomicXchgOp)

_Atomic Exchange Op_

Syntax:

```mlir
operation ::= `hivm.hir.atomic_xchg` attr-dict `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`mask` `(` $mask^ `:` type($mask) `)`)?
              (`->` type($result_tensor)^)?
```

Atomic exchange is an atomic operation that consists of three steps:

1. Read the current value of the specified memory address
2. Write the new value to the memory address
3. Return the old value read previously

The whole process is atomic, that is, it will not be interrupted by other threads during the operation.

Constraints:

1. The input memref and output memref must have the same rank and the same element type.

Arguments:

* `src`: new value
* `dst`: memory location in GM
* `mask`: mask the elements

Examples:

```mlir
hivm.hir.atomic_xchg ins(%src : memref) outs(%dst : memref)
%result = hivm.hir.atomic_xchg ins(%src : tensor) outs(%dst : tensor) -> tensor
```

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | any type |
| `dst` | Tensor or Memref |
| `mask` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | Tensor or Memref |

### `hivm.hir.batchMmadL1` (hivm::BatchMmadL1Op)

_Batch Matrix Multiply and Add Op with inputs from L1 memory hierarchy._

Syntax:

```mlir
operation ::= `hivm.hir.batchMmadL1` attr-dict
              `ins` `(` $a `,` $b `,` $init_condition `,` $real_m `,` $real_k `,` $real_n (`,` $per_channel_bias^)? `:`
                    type($a) `,` type($b) `,` type($init_condition) `,` type($real_m) `,` type($real_k) `,` type($real_n) (`,` type($per_channel_bias)^)? `)`
              `outs` `(` $c `:` type($c) `)`
              (`sync_related_args` `(` $sync_related_args^ `:` type($sync_related_args) `)`)?
              (`unit_flag` `[` $unit_flag_mode^ (`,` $unit_flag_cond^)? `]`)?
              (`->` type($result_tensors)^)?
```

The computation logic is:

```text
C = C + A x B + (optional) channel_bias
```

Note: the rank of the A, B, and C matrices must be three, with the 0-th dimension being the batch dimension.

Traits: `AttrSizedOperandSegments`, `CubeCoreTypeTrait`, `MacroOpPipeTrait`, `MacroOpTrait`

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `HIVMUnitFlagEnabledInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `a_transpose` | `::mlir::UnitAttr` | unit attribute |
| `b_transpose` | `::mlir::UnitAttr` | unit attribute |
| `enable_HF32` | `::mlir::UnitAttr` | unit attribute |
| `unit_flag_mode` | `::mlir::hivm::UnitFlagAttr` | HIVM unit flag attribute for synchronization. |
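The `C = C + A x B` computation above can be sketched in plain Python for rank-3 operands with the batch in dimension 0. This is a semantic sketch only; the real op runs on tiled L1 data with the listed `real_m`/`real_k`/`real_n` runtime sizes.

```python
def batch_mmad(c, a, b):
    """C[p] += A[p] @ B[p] for every batch p (rank-3 nested lists, batch = dim 0)."""
    for p in range(len(a)):
        m, k = len(a[p]), len(a[p][0])
        n = len(b[p][0])
        for i in range(m):
            for j in range(n):
                acc = c[p][i][j]          # accumulate into the existing C value
                for t in range(k):
                    acc += a[p][i][t] * b[p][t][j]
                c[p][i][j] = acc
    return c

a = [[[1, 2], [3, 4]]]          # one batch of a 2x2 matrix
b = [[[1, 0], [0, 1]]]          # identity
c = [[[10, 0], [0, 10]]]
batch_mmad(c, a, b)
# c == [[[11, 2], [3, 14]]]
```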
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `a` | Tensor or Memref |
| `b` | Tensor or Memref |
| `init_condition` | 1-bit signless integer |
| `real_m` | index |
| `real_k` | index |
| `real_n` | index |
| `c` | Tensor or Memref |
| `sync_related_args` | variadic of 64-bit signless integer |
| `unit_flag_cond` | 1-bit signless integer |
| `per_channel_bias` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensors` | variadic of ranked tensor of any type values |

### `hivm.hir.bitcast` (hivm::BitcastOp)

_Reinterprets the bits of a shaped value without changing data_

Syntax:

```mlir
operation ::= `hivm.hir.bitcast` $src `:` type($src) `->` type($result) attr-dict
```

The `bitcast` operation converts a tensor/memref from one element type to another while preserving the underlying bit representation. The operation requires:

1. Same shape for input and output (2x3 != 3x2)
2. Same total bit-width (element_bitwidth * num_elements)
3. Same memory layout/strides (for memrefs)

Traits: `AlwaysSpeculatableImplTrait`, `Elementwise`, `SameOperandsAndResultShape`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | any type |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | any type |

### `hivm.hir.convert_layout` (hivm::ConvertLayoutOp)

_HIVM layout conversion operation._

Syntax:

```mlir
operation ::= `hivm.hir.convert_layout` $source attr-dict `:` functional-type(operands, results)
```

The `convert_layout` operation converts a memref with one layout to another. The data is not copied or modified.

Traits: `AlwaysSpeculatableImplTrait`, `SameOperandsAndResultElementType`

Interfaces: `ConditionallySpeculatable`, `InferCoreTypeInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`, `ViewLikeOpInterface`

Effects: `MemoryEffects::Effect{}`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `srcLayout` | `::mlir::hivm::DataLayoutAttr` | HIVM data layout mapping attribute. Maps to DOTA_ND, DOTB_ND, DOTC_ND, zN, nZ and ND. `transpose`: indicates that the layout is transposed. Only valid, and must be present, for the DOTA_ND and DOTB_ND layouts. |
| `dstLayout` | `::mlir::hivm::DataLayoutAttr` | HIVM data layout mapping attribute. Maps to DOTA_ND, DOTB_ND, DOTC_ND, zN, nZ and ND. `transpose`: indicates that the layout is transposed. Only valid, and must be present, for the DOTA_ND and DOTB_ND layouts. |
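The view-like, no-copy behavior of `convert_layout` can be sketched by modeling a memref as a (buffer, shape, strides) descriptor. This is a simplified illustration; the actual HIVM layouts (zN, nZ, ND, ...) are hardware-specific address mappings, not just stride swaps.

```python
# A memref modeled as (buffer, shape, strides). convert_layout returns a new
# descriptor over the SAME buffer; no element is copied or modified.
def convert_layout(memref, new_strides):
    buffer, shape, _ = memref
    return (buffer, shape, new_strides)

def load(memref, i, j):
    buffer, _, (s0, s1) = memref
    return buffer[i * s0 + j * s1]

buf = [0, 1, 2, 3, 4, 5]                       # 2x3, row-major storage
row_major = (buf, (2, 3), (3, 1))
col_major = convert_layout(row_major, (1, 2))  # same bytes, new address mapping
# load(row_major, 1, 0) == 3, and col_major shares buf with row_major
```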
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `source` | ranked or unranked memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | ranked or unranked memref of any type values |

### `hivm.hir.copy` (hivm::CopyOp)

_HIVM data copy operation_

Syntax:

```mlir
operation ::= `hivm.hir.copy` `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)` attr-dict
              (`pad_mode` `=` $pad_mode^)?
              (`pad_value` `=` $pad_value^ `:` type($pad_value))?
              (`collapse_reassociation` `=` $collapse_reassociation^)?
              (`->` type($result_tensor)^)?
```

Copies data between local memory hierarchies. Currently supported:

- UB to UB
- UB to L1 (for Ascend910_95 series)

Examples:

```mlir
hivm.hir.copy ins(%src : memref<16x16xf16, #hivm.address_space>)
              outs(%dst : memref<16x16xf16, #hivm.address_space>)
```

Constraints:

- `src` and `dst` are expected to have the same element type.
- If `pad_mode` is not set, the `src` and `dst` shapes should be the same.
- Only left padding is supported.
- `pad_value` should have the same element type as `src` and `dst`.

### Non-contiguous reassociative reshape

`hivm.hir.copy` also supports copying non-contiguous data to contiguous storage, and vice versa. This can be seen as "expanding" or "collapsing" the data. The `collapse_reassociation` attribute specifies which axes are collapsed together. For example:

```mlir
hivm.hir.copy ins(%src : memref<32x4xbf16, strided<[16, 1]>>)
              outs(%dst : memref<32x4xbf16, strided<[4, 1]>>)
              collapse_reassociation = [[0, 1]]
```

means that the 0th and 1st axes are collapsed contiguously.

Traits: `AlwaysSpeculatableImplTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`

Interfaces: `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `pad_mode` | `::mlir::hivm::PadModeAttr` | HIVM pad mode attribute. |
| `collapse_reassociation` | `::mlir::ArrayAttr` | Array of 64-bit integer array attributes |
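The collapse example above (a 32x4 view with strides [16, 1] copied into contiguous storage with strides [4, 1]) can be sketched in Python over flat lists. This models only the addressing; element types like bf16 are modeled as plain numbers.

```python
def strided_copy(src, shape, src_strides, dst):
    """Copy a (possibly non-contiguous) 2-D view of `src` into the
    contiguous buffer `dst`, collapsing the two axes into one run."""
    rows, cols = shape
    s0, s1 = src_strides
    out = 0
    for i in range(rows):
        for j in range(cols):
            dst[out] = src[i * s0 + j * s1]
            out += 1
    return dst

# 32x4 view with strides [16, 1]: each 4-element row starts 16 elements apart.
src = list(range(32 * 16))
dst = [0] * (32 * 4)          # contiguous destination, strides [4, 1]
strided_copy(src, (32, 4), (16, 1), dst)
# dst[0:4] == [0, 1, 2, 3]; dst[4:8] == [16, 17, 18, 19]
```

After the copy the two axes occupy one contiguous run, which is what `collapse_reassociation = [[0, 1]]` expresses.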
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |
| `pad_value` | any type |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | ranked tensor of any type values |

### `hivm.hir.create_sync_block_lock` (hivm::CreateSyncBlockLockOp)

_Create sync block lock operation._

Syntax:

```mlir
operation ::= `hivm.hir.create_sync_block_lock` (`from` $lockArg^)? attr-dict `:` (`from` type($lockArg)^ `to`)? type($memref)
```

The `create_sync_block_lock` operation allocates a region of lock memory, which is used to make the code between lock and unlock execute in order among blocks.

Example:

```mlir
hivm.hir.create_sync_block_lock() : memref<1xi64>
hivm.hir.create_sync_block_lock() from %arg : from memref to memref<1xi64>
```

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `lockArg` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `memref` | memref of any type values |

### `hivm.hir.custom` (hivm::CustomOp)

_Custom operation is a generic op interface for users to write their own custom implementation. Scenarios: 1. Existing operations could not fulfill the desired functionality. 2. Existing operations could fulfill the functionality, but overall performance is not optimal. 3. Desire for a private operation._

General interface for a custom op, where:

- `name`: unique op name. Note: some names are reserved for builtins, usually starting with "__builtin". The compiler will link these builtins to a self-contained template library, which ships with bishengir-compile. For normal names/cases, the user needs to specify the implementation location/compilation commands (TODO) and all the necessary information. Available builtin names: "__builtin_gather_load"
- `inputs`: input parameters.
- `outputs`: output results, designated "init" operands, which act as initial values for the results of the operation or the init locations to which the results of the op will be written.

In order to adapt to future enhancements quickly and dynamically, the custom op relies on attributes to retrieve necessary information. The required information is:

- `CoreType`: which core type to execute on; refer to `TCoreTypeAttr`.
- `Pipe`: which pipe to execute on; refer to `PipeAttr`.
- `VFMode`: which mode to run on vector units; refer to `VFModeAttr`. This attribute is ignored when the core type is cube.

Note: for builtins, the user may or may not specify this information; the compiler will check its correctness and canonicalize.

TODO:

- `Impl`: user-provided implementation.
- Multi Pipe: a custom op that wants to use multiple pipes, which is a MacroOp in HIVM's context.

Traits: `AttrSizedOperandSegments`, `SinglePipeOpTrait`

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectOpInterface (MemoryEffectOpInterface)`, `MemoryEffectsOpInterface`, `OpPipeInterface`

Effects: `MemoryEffects::Effect{MemoryEffects::Read on ::mlir::SideEffects::DefaultResource, MemoryEffects::Write on ::mlir::SideEffects::DefaultResource}`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `name` | `::mlir::StringAttr` | string attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `inputs` | variadic of any type |
| `outputs` | variadic of any type |

#### Results

| Result | Description |
| :----: | ----------- |
| `results` | variadic of any type |

### `hivm.hir.dcci` (hivm::DCCIOp)

_Hivm dcci op_

Syntax:

```mlir
operation ::= `hivm.hir.dcci` attr-dict `(` $mode `,` $dataCacheKind (`,` $ptr^ `:` type($ptr))? `)`
```

This op cleans (writes back) and invalidates one cacheline or the entire data cache.

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `mode` | `::mlir::hivm::DCCIModeAttr` | HIVM DCCI mode attribute. |
| `dataCacheKind` | `::mlir::hivm::DataCacheKindAttr` | HIVM data cache kind attribute. |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `ptr` | memref of any type values |

### `hivm.hir.debug` (hivm::DebugOp)

_Device-side debugging_

Syntax:

```mlir
operation ::= `hivm.hir.debug` attr-dict $arg `:` type($arg)
```

Interfaces: `InferCoreTypeInterface`, `MemoryEffectOpInterface (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{MemoryEffects::Read on ::mlir::SideEffects::DefaultResource, MemoryEffects::Write on ::mlir::SideEffects::DefaultResource}`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `debugtype` | `::mlir::StringAttr` | string attribute |
| `prefix` | `::mlir::StringAttr` | string attribute |
| `hex` | `::mlir::BoolAttr` | bool attribute |
| `tcoretype` | `::mlir::hivm::TCoreTypeAttr` | HIVM op core type attribute. |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `arg` | integer or floating-point or Tensor or Memref |

### `hivm.hir.finish_debug` (hivm::FinishDebugOp)

_Finish func for device-side debugging_

Syntax:

```mlir
operation ::= `hivm.hir.finish_debug` attr-dict
```

Traits: `CubeVectorCoreTypeTrait`

### `hivm.hir.fixpipe` (hivm::FixpipeOp)

_HIVM data copy operation from L0C to other memory hierarchies._

Syntax:

```mlir
operation ::= `hivm.hir.fixpipe` attr-dict `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`unit_flag` `[` $unit_flag_mode^ (`,` $unit_flag_cond^)? `]`)?
              (`->` type($result_tensor)^)?
```

Fixpipe is the pipeline that performs data movement from L0C to other memory hierarchies, with on-the-fly fixed functions: pre-stage quantization, pre-stage ReLU, element-wise add, post-stage ReLU, and post-stage quantization. Currently supported:

- L0C to OUT
- L0C to L1
- L0C to UB (for Ascend910_95 series)

Additionally, fixpipe is also capable of layout transforms.

Traits: `AlwaysSpeculatableImplTrait`, `CubeCoreTypeTrait`, `OpPipeTrait`, `SinglePipeOpTrait`

Interfaces: `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `HIVMUnitFlagEnabledInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `enable_nz2nd` | `::mlir::UnitAttr` | unit attribute |
| `pre_quant` | `::mlir::hivm::FixpipePreQuantModeAttr` | HIVM fixpipe pre_quant mode |
| `pre_relu` | `::mlir::hivm::FixpipePreReluModeAttr` | HIVM fixpipe pre_relu mode |
| `channel_split` | `::mlir::BoolAttr` | bool attribute |
| `unit_flag_mode` | `::mlir::hivm::UnitFlagAttr` | HIVM unit flag attribute for synchronization. |
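The fixed-function chain named above runs in a documented order: pre-stage quantization, pre-stage ReLU, element-wise add, post-stage ReLU, post-stage quantization. A per-element Python sketch of that ordering follows; the quantization callables are placeholders, since the real modes are hardware-defined.

```python
def fixpipe_element(x, pre_quant=None, pre_relu=False, add=None,
                    post_relu=False, post_quant=None):
    """Apply the fixpipe fixed functions to one element, in order:
    pre-quant -> pre-ReLU -> element-wise add -> post-ReLU -> post-quant."""
    if pre_quant is not None:
        x = pre_quant(x)
    if pre_relu:
        x = max(x, 0)
    if add is not None:
        x = x + add
    if post_relu:
        x = max(x, 0)
    if post_quant is not None:
        x = post_quant(x)
    return x

# e.g. scale by 0.5, ReLU, add a residual of -2, ReLU again:
y = fixpipe_element(8.0, pre_quant=lambda v: v * 0.5, pre_relu=True,
                    add=-2.0, post_relu=True)
# y == 2.0  (8 -> 4 -> 4 -> 2 -> 2)
```

The point of the sketch is that the stage order is fixed; swapping, say, the add and the post-ReLU would change the result.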
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | shaped of any type values |
| `dst` | shaped of any type values |
| `unit_flag_cond` | 1-bit signless integer |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | ranked tensor of any type values |

### `hivm.hir.get_block_idx` (hivm::GetBlockIdxOp)

_Get block idx of the current device thread used for parallelization._

Syntax:

```mlir
operation ::= `hivm.hir.get_block_idx` attr-dict `->` type($result)
```

This op gets the block idx of the current device thread. This op will be lowered to `GetBlockIdxInstrOp`.

Traits: `AlwaysSpeculatableImplTrait`, `CubeVectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | 64-bit signless integer |

### `hivm.hir.get_block_num` (hivm::GetBlockNumOp)

_Get block number of the current device thread used for parallelization._

Syntax:

```mlir
operation ::= `hivm.hir.get_block_num` attr-dict `->` type($result)
```

This op gets the block number of the current device thread. This op will be lowered to `GetBlockNumInstrOp`.

Traits: `AlwaysSpeculatableImplTrait`, `CubeVectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | 64-bit signless integer |

### `hivm.hir.get_sub_block_idx` (hivm::GetSubBlockIdxOp)

_Get sub block idx of the current device thread used for parallelization._

Syntax:

```mlir
operation ::= `hivm.hir.get_sub_block_idx` attr-dict `->` type($result)
```

This op gets the sub block idx of the current device thread. This op will be lowered to `GetSubBlockIdxInstrOp`.

Traits: `AlwaysSpeculatableImplTrait`, `CubeVectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | 64-bit signless integer |

### `hivm.hir.get_sub_block_num` (hivm::GetSubBlockNumOp)

_Get sub block number of the current device thread used for parallelization._

Syntax:

```mlir
operation ::= `hivm.hir.get_sub_block_num` attr-dict `->` type($result)
```

This op gets the sub block number of the current device thread. This op will be lowered to `GetSubBlockNumInstrOp`.

Traits: `AlwaysSpeculatableImplTrait`, `CubeVectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | 64-bit signless integer |

### `hivm.hir.get_sys_cnt` (hivm::GetSysCntOp)

_Get sys cnt of the current device_

Syntax:

```mlir
operation ::= `hivm.hir.get_sys_cnt` attr-dict `->` type($result)
```

This op gets the sys cnt of the current device. This op will be lowered to `GetSysCntInstrOp`.

Traits: `AlwaysSpeculatableImplTrait`, `CubeVectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | 64-bit signless integer |

### `hivm.hir.init_debug` (hivm::InitDebugOp)

_Init func for device-side debugging_

Syntax:

```mlir
operation ::= `hivm.hir.init_debug` attr-dict
```

Traits: `CubeVectorCoreTypeTrait`

### `hivm.hir.load` (hivm::LoadOp)

_HIVM data load operation_

Syntax:

```mlir
operation ::= `hivm.hir.load` `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)` attr-dict
              (`pad_mode` `=` $pad_mode^)?
              (`pad_value` `=` $pad_value^ `:` type($pad_value))?
              (`left_padding_num` `=` $left_padding_num^ `:` type($left_padding_num))?
              (`init_out_buffer` `=` $init_out_buffer^ )?
              (`right_padding_num` `=` $right_padding_num^ `:` type($right_padding_num))?
              (`init_condition` `=` $init_condition^ `:` type($init_condition))?
              (`may_implicit_transpose_with_last_axis` `=` $may_implicit_transpose_with_last_axis^ )?
              (`->` type($result_tensor)^)?
```

Loads data from the global memory to the local buffer. Currently only loading to the unified buffer is supported.

Examples:

```mlir
hivm.hir.load ins(%src : memref<16x16xf16, #hivm.address_space>)
              outs(%dst : memref<16x16xf16, #hivm.address_space>)
```

Constraints:

- `src` and `dst` are expected to have the same element type.
- If `pad_mode` is not set, the `src` and `dst` shapes should be the same.
- Both left and right padding are supported.
- `pad_value` should have the same element type as `src` and `dst`.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`

Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `pad_mode` | `::mlir::hivm::PadModeAttr` | HIVM pad mode attribute. |
| `init_out_buffer` | `::mlir::BoolAttr` | bool attribute |
| `may_implicit_transpose_with_last_axis` | `::mlir::BoolAttr` | bool attribute |
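The padding behavior of the load (pad elements on the left, the source data, then pad elements on the right) can be sketched in one dimension. This is an illustration of the semantics only; the hypothetical helper below is not part of the dialect.

```python
def load_with_padding(src, dst_len, pad_value, left_padding_num):
    """1-D sketch of a padded load: `left_padding_num` pad elements,
    then the source data, then pad elements filling the rest of dst."""
    right_padding_num = dst_len - left_padding_num - len(src)
    assert right_padding_num >= 0, "destination too small for src + left pad"
    return ([pad_value] * left_padding_num
            + list(src)
            + [pad_value] * right_padding_num)

out = load_with_padding([1, 2, 3], dst_len=6, pad_value=0, left_padding_num=2)
# out == [0, 0, 1, 2, 3, 0]
```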
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |
| `pad_value` | any type |
| `left_padding_num` | index |
| `right_padding_num` | any type |
| `init_condition` | any type |

#### Results

| Result | Description |
| :----: | ----------- |
| `result_tensor` | ranked tensor of any type values |

### `hivm.hir.load_scalar` (hivm::LoadScalarOp)

_Hivm load scalar_

Syntax:

```mlir
operation ::= `hivm.hir.load_scalar` attr-dict $addr `:` type($addr) `->` type($result)
```

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `addr` | LLVM pointer type |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | integer or floating-point |

### `hivm.hir.matmul` (hivm::MatmulOp)

_HIVM Matrix Multiply Op with inputs from global memory_

Syntax:

```mlir
operation ::= `hivm.hir.matmul` attr-dict `ins` `(` $a `,` $b `:` type($a) `,` type($b) `)`
              `outs` `(` $c `:` type($c) `)`
              (`tiling_params` `=` $tilingParams^ `:` type($tilingParams) )?
              (`bias` `=` $bias^ `:` type($bias) )?
              (`descale` `=` $descale^ `:` type($descale))?
              (`a_transpose` $aTranspose^)?
              (`b_transpose` $bTranspose^)?
              (`descale_mode` `=` $descaleMode^)?
              (`block_sizes` `(` $blockSizes^ `:` type($blockSizes) `)`)?
              (`process_sizes` `(` $processSizes^ `:` type($processSizes) `)`)?
              (`swizzle_offset` `=` $swizzleOffset^ `:` type($swizzleOffset) )?
              (`swizzle_direction` `=` $swizzleDirection^ `:` type($swizzleDirection))?
              (`epilogue_p_tiles` `=` $epiloguePTiles^ `:` type($epiloguePTiles))?
              (`->` type($result)^)?
```

This operation takes three tiled matrices from the global memory as arguments:

- `A` (ranked type): an `m x k` matrix
- `B` (ranked type): a `k x n` matrix
- `C` (ranked type): an `m x n` matrix

Other arguments include:

- `block_sizes`: data size of the m, n, and k dimensions processed on the L1 memory hierarchy
- `process_sizes`: data size of the m, n, and k dimensions processed on the L0 memory hierarchy
- (optional) `swizzle_offset`: number of contiguous blocks in the swizzle schedule
- (optional) `swizzle_direction`: block direction of the swizzle schedule
- (optional) `epilogue_p_tiles`: number of blocks the attached compute op handles at once

The operation performed is represented as `C = A * B`. If `a_transpose` or `b_transpose` is present, the respective operand is loaded in a transposed manner.

Optionally, this operation takes the following arguments:

- `bias` (ranked type): bias value, a vector of shape `n`
- `descale`: dequantization value. Three modes are supported:
  - `DescaleNull`: no descale.
  - `DescalePerChannel`: the shape of `descale` equals `n`.
  - `DescalePerTensor`: the shape of `descale` equals `1`.

With these, the operation performed is represented as `C = descale * (A * B + bias)`.

Traits: `AttrSizedOperandSegments`, `MacroOpPipeTrait`, `MacroOpTrait`

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `aTranspose` | `::mlir::UnitAttr` | unit attribute |
| `bTranspose` | `::mlir::UnitAttr` | unit attribute |
| `descaleMode` | `::mlir::hivm::DescaleModeAttr` | HIVM descale mode attribute for matmul op. |
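The `C = descale * (A * B + bias)` formula with its per-channel and per-tensor descale modes can be sketched in plain Python. This is a semantic sketch only, not the tiled device implementation; descale of length `n` models `DescalePerChannel`, length 1 models `DescalePerTensor`, and `None` models `DescaleNull`.

```python
def matmul_descale(a, b, bias=None, descale=None):
    """C = descale * (A @ B + bias) over nested lists.
    len(descale) == n  -> per-channel; len(descale) == 1 -> per-tensor."""
    m, k, n = len(a), len(b), len(b[0])
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = sum(a[i][t] * b[t][j] for t in range(k))
            if bias is not None:
                acc += bias[j]
            if descale is not None:
                acc *= descale[j] if len(descale) == n else descale[0]
            c[i][j] = acc
    return c

c = matmul_descale([[1.0, 2.0]], [[1.0, 0.0], [0.0, 1.0]],
                   bias=[1.0, 1.0], descale=[0.5])
# c == [[1.0, 1.5]]  (A @ B = [[1, 2]], + bias = [[2, 3]], * 0.5)
```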
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `a` | shaped of any type values |
| `b` | shaped of any type values |
| `tilingParams` | shaped of any type values |
| `bias` | shaped of any type values |
| `descale` | shaped of any type values |
| `blockSizes` | variadic of 64-bit signless integer |
| `processSizes` | variadic of 64-bit signless integer |
| `swizzleOffset` | 64-bit signless integer |
| `swizzleDirection` | 64-bit signless integer |
| `epiloguePTiles` | 64-bit signless integer |
| `c` | shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.mix_group_matmul` (hivm::MixGroupMatmulOp)

_HIVM (Mix) Matrix Group Multiply Op with inputs from global memory_

Syntax:

```mlir
operation ::= `hivm.hir.mix_group_matmul` attr-dict
              `ins` `(` $a `,` $b `,` $tokens_per_expert `:` type($a) `,` type($b) `,` type($tokens_per_expert) `)`
              (`post_vector_func_ins` `(` $postVecFuncIns^ `:` type($postVecFuncIns) `)`)?
              (`post_vector_func_outs` `(` $postVecFuncOuts^ `:` type($postVecFuncOuts) `)`)?
              (`workspace_ins` `(` $workspaceIns^ `:` type($workspaceIns) `)`)?
              `outs` `(` $c `:` type($c) `)`
              (`tiling_params` `=` $tilingParams^ `:` type($tilingParams) )?
              (`comm_params` `=` $commParams^ `:` type($commParams) )?
              (`bias` `=` $bias^ `:` type($bias) )?
              (`descale` `=` $descale^ `:` type($descale))?
              (`a_transpose` $aTranspose^)?
              (`b_transpose` $bTranspose^)?
              (`descale_mode` `=` $descaleMode^)?
              (`block_sizes` `(` $blockSizes^ `:` type($blockSizes) `)`)?
              (`process_sizes` `(` $processSizes^ `:` type($processSizes) `)`)?
              (`swizzle_offset` `=` $swizzleOffset^ `:` type($swizzleOffset) )?
              (`swizzle_direction` `=` $swizzleDirection^ `:` type($swizzleDirection))?
              (`epilogue_p_tiles` `=` $epiloguePTiles^ `:` type($epiloguePTiles))?
              (`->` type($result)^)?
```

This operation takes three tiled matrices from the global memory as arguments:

- `A` (ranked type): an `m x k` matrix
- `B` (ranked type): a `k x n` matrix
- `C` (ranked type): an `m x n` matrix

Other arguments include:

- `block_sizes`: data size of the m, n, and k dimensions processed on the L1 memory hierarchy
- `process_sizes`: data size of the m, n, and k dimensions processed on the L0 memory hierarchy
- (optional) `swizzle_offset`: number of contiguous blocks in the swizzle schedule
- (optional) `swizzle_direction`: block direction of the swizzle schedule
- (optional) `epilogue_p_tiles`: number of blocks the attached compute op handles at once

The operation performed is represented as `C = A * B`. If `a_transpose` or `b_transpose` is present, the respective operand is loaded in a transposed manner.

Optionally, this operation takes the following arguments:

- `bias` (ranked type): bias value, a vector of shape `n`
- `descale`: dequantization value. Three modes are supported:
  - `DescaleNull`: no descale.
  - `DescalePerChannel`: the shape of `descale` equals `n`.
  - `DescalePerTensor`: the shape of `descale` equals `1`.

With these, the operation performed is represented as `C = descale * (A * B + bias)`.

This operation also supports tile-level fusion with a post-vector function (hence it is a Mix op). `tokens_per_expert` specifies how matmuls are distributed to different experts. `post_vector_func_ins` specifies the arguments, and `post_vector_func_outs` specifies the outputs. `comm_params` specifies communication-related arguments (e.g. topology, communicator, group, etc.) when fusing communication operators.

Traits: `AttrSizedOperandSegments`, `MacroOpPipeTrait`, `MacroOpTrait`

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `aTranspose` | `::mlir::UnitAttr` | unit attribute |
| `bTranspose` | `::mlir::UnitAttr` | unit attribute |
| `descaleMode` | `::mlir::hivm::DescaleModeAttr` | HIVM descale mode attribute for matmul op. |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `a` | shaped of any type values |
| `b` | shaped of any type values |
| `tokens_per_expert` | shaped of any type values |
| `postVecFuncIns` | variadic of shaped of any type values |
| `postVecFuncOuts` | variadic of shaped of any type values |
| `workspaceIns` | variadic of shaped of any type values |
| `tilingParams` | shaped of any type values |
| `commParams` | shaped of any type values |
| `bias` | shaped of any type values |
| `descale` | shaped of any type values |
| `blockSizes` | variadic of 64-bit signless integer |
| `processSizes` | variadic of 64-bit signless integer |
| `swizzleOffset` | 64-bit signless integer |
| `swizzleDirection` | 64-bit signless integer |
| `epiloguePTiles` | 64-bit signless integer |
| `c` | shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.mix_matmul` (hivm::MixMatmulOp)

_HIVM (Mix) Matrix Multiply Op with inputs from global memory_

Syntax:

```mlir
operation ::= `hivm.hir.mix_matmul` attr-dict `ins` `(` $a `,` $b `:` type($a) `,` type($b) `)`
              (`post_vector_func_ins` `(` $postVecFuncIns^ `:` type($postVecFuncIns) `)`)?
              (`workspace_ins` `(` $workspaceIns^ `:` type($workspaceIns) `)`)?
              `outs` `(` $c `:` type($c) `)`
              (`tiling_params` `=` $tilingParams^ `:` type($tilingParams) )?
              (`comm_params` `=` $commParams^ `:` type($commParams) )?
              (`bias` `=` $bias^ `:` type($bias) )?
              (`descale` `=` $descale^ `:` type($descale))?
              (`a_transpose` $aTranspose^)?
              (`b_transpose` $bTranspose^)?
              (`descale_mode` `=` $descaleMode^)?
              (`block_sizes` `(` $blockSizes^ `:` type($blockSizes) `)`)?
              (`process_sizes` `(` $processSizes^ `:` type($processSizes) `)`)?
              (`swizzle_offset` `=` $swizzleOffset^ `:` type($swizzleOffset) )?
              (`swizzle_direction` `=` $swizzleDirection^ `:` type($swizzleDirection))?
              (`epilogue_p_tiles` `=` $epiloguePTiles^ `:` type($epiloguePTiles))?
              (`->` type($result)^)?
```

This operation takes three tiled matrices from the global memory as arguments:

- `A` (ranked type): an `m x k` matrix
- `B` (ranked type): a `k x n` matrix
- `C` (ranked type): an `m x n` matrix

Other arguments include:

- `block_sizes`: data size of the m, n, and k dimensions processed on the L1 memory hierarchy
- `process_sizes`: data size of the m, n, and k dimensions processed on the L0 memory hierarchy
- (optional) `swizzle_offset`: number of contiguous blocks in the swizzle schedule
- (optional) `swizzle_direction`: block direction of the swizzle schedule
- (optional) `epilogue_p_tiles`: number of blocks the attached compute op handles at once

The operation performed is represented as `C = A * B`. If `a_transpose` or `b_transpose` is present, the respective operand is loaded in a transposed manner.

Optionally, this operation takes the following arguments:

- `bias` (ranked type): bias value, a vector of shape `n`
- `descale`: dequantization value. Three modes are supported:
  - `DescaleNull`: no descale.
  - `DescalePerChannel`: the shape of `descale` equals `n`.
  - `DescalePerTensor`: the shape of `descale` equals `1`.

With these, the operation performed is represented as `C = descale * (A * B + bias)`.

This operation also supports tile-level fusion with a post-vector function (hence it is a Mix op). `post_vector_func_ins` specifies the arguments. `comm_params` specifies communication-related arguments (e.g. topology, communicator, group, etc.) when fusing communication operators.

Traits: `AttrSizedOperandSegments`, `MacroOpPipeTrait`, `MacroOpTrait`

Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `aTranspose` | ::mlir::UnitAttr | unit attribute |
| `bTranspose` | ::mlir::UnitAttr | unit attribute |
| `descaleMode` | ::mlir::hivm::DescaleModeAttr | HIVM descale mode attribute for matmul op. |
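A minimal sketch of the assembly format above, with hypothetical SSA values and illustrative shapes (all optional clauses except `bias` are omitted; the exact shapes and address spaces are assumptions, not taken from this reference):

```mlir
// C = A * B + bias, with A, B, and C resident in global memory.
hivm.hir.mix_matmul ins(%a, %b : memref<64x128xf16>, memref<128x32xf16>)
    outs(%c : memref<64x32xf32>)
    bias = %bias : memref<32xf32>
```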
#### Operands | Operand | Description | | :-----: | ----------- | | `a` | shaped of any type values | `b` | shaped of any type values | `postVecFuncIns` | variadic of shaped of any type values | `workspaceIns` | variadic of shaped of any type values | `tilingParams` | shaped of any type values | `commParams` | shaped of any type values | `bias` | shaped of any type values | `descale` | shaped of any type values | `blockSizes` | variadic of 64-bit signless integer | `processSizes` | variadic of 64-bit signless integer | `swizzleOffset` | 64-bit signless integer | `swizzleDirection` | 64-bit signless integer | `epiloguePTiles` | 64-bit signless integer | `c` | shaped of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.mmadL1` (hivm::MmadL1Op) _Matrix Multiply and Add Op with inputs from L1 memory hierarchy._ Syntax: ```mlir operation ::= `hivm.hir.mmadL1` attr-dict `ins` `(` $a `,` $b `,` $init_condition `,` $real_m `,` $real_k `,` $real_n (`,` $per_channel_bias^)? `:` type($a) `,` type($b) `,` type($init_condition) `,` type($real_m) `,` type($real_k) `,` type($real_n) (`,` type($per_channel_bias)^)? `)` `outs` `(` $c `:` type($c) `)` (`sync_related_args` `(` $sync_related_args^ `:` type($sync_related_args) `)`)? (`unit_flag` `[` $unit_flag_mode^ (`,` $unit_flag_cond^)? `]`)? (`->` type($result_tensors)^)? ``` The computation logic is: ```text C = C + A x B + (optional) channel_bias ``` Note: the rank of A, B, and C Matrix must be two. Traits: `AttrSizedOperandSegments`, `CubeCoreTypeTrait`, `MacroOpPipeTrait`, `MacroOpTrait` Interfaces: `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `HIVMUnitFlagEnabledInterface`, `MemoryEffectsOpInterface`, `OpLayoutInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `a_transpose` | ::mlir::UnitAttr | unit attribute |
| `b_transpose` | ::mlir::UnitAttr | unit attribute |
| `enable_HF32` | ::mlir::UnitAttr | unit attribute |
| `unit_flag_mode` | ::mlir::hivm::UnitFlagAttr | HIVM unit flag attribute for synchronization. |
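An illustrative instance of the format above (SSA values and shapes are hypothetical; `%init` is an assumed i1 value selecting whether `C` is initialized before accumulation):

```mlir
// C = C + A x B on rank-2 tiles; real_m/real_k/real_n give the valid extents.
hivm.hir.mmadL1 ins(%a, %b, %init, %m, %k, %n
    : memref<16x32xf16>, memref<32x16xf16>, i1, index, index, index)
  outs(%c : memref<16x16xf32>)
```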
#### Operands | Operand | Description | | :-----: | ----------- | | `a` | Tensor or Memref | `b` | Tensor or Memref | `init_condition` | 1-bit signless integer | `real_m` | index | `real_k` | index | `real_n` | index | `c` | Tensor or Memref | `sync_related_args` | variadic of 64-bit signless integer | `unit_flag_cond` | 1-bit signless integer | `per_channel_bias` | Tensor or Memref #### Results | Result | Description | | :----: | ----------- | | `result_tensors` | variadic of ranked tensor of any type values ### `hivm.hir.nd2nz` (hivm::ND2NZOp) _HIVM data copy operation with on-the-fly ND to NZ layout transformation_ Syntax: ```mlir operation ::= `hivm.hir.nd2nz` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`init_out_buffer` `=` $init_out_buffer^ )? (`pad_value` `=` $pad_value^ `:` type($pad_value))? (`init_condition` `=` $init_condition^ `:` type($init_condition))? (`->` type($result_tensor)^)? ``` - `dst_continuous`: if present, signifies that the source data is stored contiguously in the destination buffer. This must be set in order for this op to be converted to a library function call. Constraints: - if `init_out_buffer` is true, `pad_value` must be provided. Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CubeCoreTypeTrait`, `OpPipeTrait`, `SinglePipeOpTrait` Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `dst_continuous` | ::mlir::UnitAttr | unit attribute |
| `init_out_buffer` | ::mlir::BoolAttr | bool attribute |
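An illustrative form of the syntax above (SSA values and shapes are hypothetical; `dst_continuous` is shown in the leading attr-dict, as unit attributes normally appear there):

```mlir
// Copy ND-laid-out data into an NZ-laid-out buffer in one move.
hivm.hir.nd2nz {dst_continuous} ins(%src : memref<16x16xf16>)
    outs(%dst : memref<16x16xf16>)
```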
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | shaped of any type values | `dst` | shaped of any type values | `pad_value` | any type | `init_condition` | any type #### Results | Result | Description | | :----: | ----------- | | `result_tensor` | variadic of ranked tensor of any type values ### `hivm.hir.nz2nd` (hivm::NZ2NDOp) _HIVM data copy operation from L1 to Global Memory with NZ2ND conversion_ Syntax: ```mlir operation ::= `hivm.hir.nz2nd` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`->` type($result_tensor)^)? ``` NZ2ND does data movement from L1 to OUT with NZ2ND conversion. Traits: `AlwaysSpeculatableImplTrait`, `CubeCoreTypeTrait`, `OpPipeTrait`, `SinglePipeOpTrait` Interfaces: `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Operands | Operand | Description | | :-----: | ----------- | | `src` | Tensor or Memref | `dst` | Tensor or Memref #### Results | Result | Description | | :----: | ----------- | | `result_tensor` | ranked tensor of any type values ### `hivm.hir.pipe_barrier` (hivm::PipeBarrierOp) _Hivm pipe barrier._ Syntax: ```mlir operation ::= `hivm.hir.pipe_barrier` `[` $pipe `]` attr-dict ``` Interfaces: `InferCoreTypeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. |
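Based on the syntax above, a minimal illustrative form. The pipe value `PIPE_ALL` is an assumption; the actual `PipeAttr` spellings are not listed in this reference:

```mlir
// Block until all in-flight operations on every pipe have completed.
hivm.hir.pipe_barrier[PIPE_ALL]
```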
### `hivm.hir.pointer_cast` (hivm::PointerCastOp) _HIVM pointer cast op at specific i64 addr_ Syntax: ```mlir operation ::= `hivm.hir.pointer_cast` `(`$addrs `)` (`[` $dynamicSizes^`]`)? attr-dict `:` type($result) ``` The specific i64 addrs are stored in `$addrs`, which is variadic. Constraints: 1. The type of each address should be i64. 2. addrs should have at least one addr. Examples: ```mlir %addr = arith.constant 1234 : i64 %tmp = hivm.hir.pointer_cast(%addr) : memref<32xf32> %addr2 = arith.constant 1600 : i64 %addr3 = arith.constant 3200 : i64 %tmp2 = hivm.hir.pointer_cast(%addr, %addr2) : memref<32xf32> %tmp3 = hivm.hir.pointer_cast(%addr, %addr2, %addr3) : memref<32xf32> ``` Traits: `AttrSizedOperandSegments`, `CubeVectorCoreTypeTrait` #### Operands | Operand | Description | | :-----: | ----------- | | `addrs` | variadic of 64-bit signless integer | `dynamicSizes` | variadic of index #### Results | Result | Description | | :----: | ----------- | | `result` | memref of any type values ### `hivm.hir.set_ffts_base_addr` (hivm::SetFFTSBaseAddrOp) _Set base addr for ffts sync mechanism._ Syntax: ```mlir operation ::= `hivm.hir.set_ffts_base_addr` attr-dict $ffts_base_addr ``` Traits: `CubeVectorCoreTypeTrait` #### Operands | Operand | Description | | :-----: | ----------- | | `ffts_base_addr` | 64-bit signless integer ### `hivm.hir.set_flag` (hivm::SetFlagOp) _Hivm set flag._ Syntax: ```mlir operation ::= `hivm.hir.set_flag` `[` $set_pipe `,` $wait_pipe `,` custom($static_event_id, $dynamic_event_id) `]` attr-dict ``` Interfaces: `InferCoreTypeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `set_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. |
| `wait_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. |
| `static_event_id` | ::mlir::hivm::EventAttr | HIVM event attribute for synchronization. |
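An illustrative form following the syntax above. The pipe names and the event id rendering are assumptions; actual `PipeAttr`/`EventAttr` spellings may differ:

```mlir
// Signal event 0 from the MTE2 pipe so that the V pipe can wait on it.
hivm.hir.set_flag[PIPE_MTE2, PIPE_V, 0]
```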
#### Operands | Operand | Description | | :-----: | ----------- | | `dynamic_event_id` | 64-bit signless integer ### `hivm.hir.set_mask_norm` (hivm::SetMaskNormOp) _Hivm set mask norm_ Syntax: ```mlir operation ::= `hivm.hir.set_mask_norm` attr-dict ``` ### `hivm.hir.store` (hivm::StoreOp) _HIVM data store operation_ Syntax: ```mlir operation ::= `hivm.hir.store` `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` attr-dict (`atomic` `=` $atomic_kind^)? (`->` type($result_tensor)^)? ``` Stores data from a local buffer to global memory. Currently only storing data on the unified buffer is supported. Examples: ```mlir hivm.hir.store ins(%src : memref<16x16xf16, #hivm.address_space>) outs(%dst : memref<16x16xf16, #hivm.address_space>) ``` Constraints: - `src` and `dst` are expected to have the same element type. - If `atomic_kind` is set, the kind is one of `add`, `max`, `min`. Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait` Interfaces: `ConditionallySpeculatable`, `CopyOpInterface`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `atomic_kind` | ::mlir::hivm::AtomicKindAttr | HIVM atomic store kind attribute. |
| `may_implicit_transpose_with_last_axis` | ::mlir::BoolAttr | bool attribute |
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | Tensor or Memref | `dst` | Tensor or Memref #### Results | Result | Description | | :----: | ----------- | | `result_tensor` | ranked tensor of any type values ### `hivm.hir.sync_block` (hivm::SyncBlockOp) _Hivm sync block between different kernels._ Syntax: ```mlir operation ::= `hivm.hir.sync_block` attr-dict `[` $sync_block_mode (`,` $flag_id^)?`]` (`ffts_base_addr` `=` $ffts_base_addr^)? (`tcube_pipe` `=` $tcube_pipe^)? (`tvector_pipe` `=` $tvector_pipe^)? ``` The sync block modes are: - ALL_CUBE : All cube cores are synchronized to the same point. `tcube_pipe` needs to be set to the pipe that the cube core is waiting for. - ALL_VECTOR : All vector cores are synchronized to the same point. `tvector_pipe` needs to be set to the pipe that the vector core is waiting for. - ALL_SUB_VECTOR : All sub-vector cores are synchronized to the same point. - BARRIER_CUBE : Used for cube-cube synchronization; it is lowered to a barrier.pipe_all and is only copied to the aic kernel. - BARRIER_VECTOR : Used for vector-vector synchronization; it is lowered to a barrier.pipe_all and is only copied to the aiv kernel. - ALL : All aic/aiv cores are synchronized to the same point. `tvector_pipe` needs to be set to the pipe that the vector core is waiting for. Note: - SyncBlockOp can only be used after data has been moved to GM. - `$ffts_base_addr` must be set on Atlas A2/A3. Every time FFTS collects one specific `$flag_id` from all subblocks, FFTS sets the flag ID back to the blocks in the group to do synchronization. Interfaces: `InferCoreTypeInterface` #### Attributes | Attribute | MLIR Type | Description | |-----------|-----------|-------------| | `sync_block_mode` | ::mlir::hivm::SyncBlockModeAttr | HIVM synchronization block mode attribute. | | `flag_id` | ::mlir::IntegerAttr | An Attribute containing an integer value

Syntax:
`integer-attribute ::= (integer-literal ( : (index-type \| integer-type) )?) \| true \| false`

An integer attribute is a literal attribute that represents an integral value of the specified integer or index type. `i1` integer attributes are treated as `boolean` attributes, and use a unique assembly format of either `true` or `false` depending on the value. The default type for non-boolean integer attributes, if a type is not specified, is signless 64-bit integer.

Examples:
`10 : i32`
`10 // : i64 is implied here.`
`true // A bool, i.e. i1, value.`
`false // A bool, i.e. i1, value.` | | `tcube_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | | `tvector_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | #### Operands | Operand | Description | | :-----: | ----------- | | `ffts_base_addr` | 64-bit signless integer ### `hivm.hir.sync_block_lock` (hivm::SyncBlockLockOp) _Sync block lock operation._ Syntax: ```mlir operation ::= `hivm.hir.sync_block_lock` attr-dict `lock_var` `(` $lock_var `:` type($lock_var) `)` ``` The sync_block_lock operation will not release until the lock_var equals the block idx. Example: ```mlir hivm.hir.sync_block_lock lock_var(%lock : memref<1xi64>) ``` #### Operands | Operand | Description | | :-----: | ----------- | | `lock_var` | 1D memref of 64-bit signless integer values ### `hivm.hir.sync_block_set` (hivm::SyncBlockSetOp) _Hivm set block sync._ Syntax: ```mlir operation ::= `hivm.hir.sync_block_set` attr-dict `[` $tcore_type `,` $tpipe `,` $pipe`]` `flag` `=` custom($static_flag_id, $dynamic_flag_id) (`ffts_base_addr` `=` $ffts_base_addr^)? (`sync_instr_mode` `=` $tsync_instr_mode^)? ``` Traits: `AttrSizedOperandSegments` Interfaces: `InferCoreTypeInterface` #### Attributes | Attribute | MLIR Type | Description | |-----------|-----------|-------------| | `tcore_type` | ::mlir::hivm::TCoreTypeAttr | HIVM op core type attribute. | | `tpipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | | `pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | | `static_flag_id` | ::mlir::IntegerAttr | An Attribute containing a integer value

Syntax:
`integer-attribute ::= (integer-literal ( : (index-type \| integer-type) )?) \| true \| false`

An integer attribute is a literal attribute that represents an integral value of the specified integer or index type. `i1` integer attributes are treated as `boolean` attributes, and use a unique assembly format of either `true` or `false` depending on the value. The default type for non-boolean integer attributes, if a type is not specified, is signless 64-bit integer.

Examples:
`10 : i32`
`10 // : i64 is implied here.`
`true // A bool, i.e. i1, value.`
`false // A bool, i.e. i1, value.` | | `tsync_instr_mode` | ::mlir::hivm::SyncBlockInstrModeAttr | HIVM synchronization block instruction mode attribute. | #### Operands | Operand | Description | | :-----: | ----------- | | `dynamic_flag_id` | 64-bit signless integer | `ffts_base_addr` | 64-bit signless integer ### `hivm.hir.sync_block_unlock` (hivm::SyncBlockUnlockOp) _Sync block unlock operation._ Syntax: ```mlir operation ::= `hivm.hir.sync_block_unlock` attr-dict `lock_var` `(` $lock_var `:` type($lock_var) `)` ``` The `sync_block_unlock` operation increments the lock_var and releases the lock. Example: ```mlir hivm.hir.sync_block_unlock lock_var(%lock : memref<1xi64>) ``` #### Operands | Operand | Description | | :-----: | ----------- | | `lock_var` | 1D memref of 64-bit signless integer values ### `hivm.hir.sync_block_wait` (hivm::SyncBlockWaitOp) _Hivm wait block sync._ Syntax: ```mlir operation ::= `hivm.hir.sync_block_wait` attr-dict `[` $tcore_type `,` $tpipe `,` $pipe`]` `flag` `=` custom($static_flag_id, $dynamic_flag_id) ``` Interfaces: `InferCoreTypeInterface` #### Attributes | Attribute | MLIR Type | Description | |-----------|-----------|-------------| | `tcore_type` | ::mlir::hivm::TCoreTypeAttr | HIVM op core type attribute. | | `tpipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | | `pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. | | `static_flag_id` | ::mlir::IntegerAttr | An Attribute containing an integer value

Syntax:
`integer-attribute ::= (integer-literal ( : (index-type \| integer-type) )?) \| true \| false`

An integer attribute is a literal attribute that represents an integral value of the specified integer or index type. `i1` integer attributes are treated as `boolean` attributes, and use a unique assembly format of either `true` or `false` depending on the value. The default type for non-boolean integer attributes, if a type is not specified, is signless 64-bit integer.

Examples:
`10 : i32`
`10 // : i64 is implied here.`
`true // A bool, i.e. i1, value.`
`false // A bool, i.e. i1, value.` | #### Operands | Operand | Description | | :-----: | ----------- | | `dynamic_flag_id` | 64-bit signless integer ### `hivm.hir.vabs` (hivm::VAbsOp) _Elementwise Vector Absolute Value Op_ Syntax: ```mlir operation ::= `hivm.hir.vabs` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)? ``` *From the Elementwise Nary Vector Op template:* This operation performs element-wise operation on N operands and produces a single result. It may perform either transpose or broadcast along the way (but not both). Common constraints: 1. Follows DestinationStyleOpInterface. 2. The number of input operands is N; the number of output/result is one. 3. The input/init operands and result have the same rank. 4. The first input is vector-only. Additional constraints: 1. The input/init operands and result have the same element type. Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
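A minimal illustrative pair of forms following the syntax above (SSA values and shapes are hypothetical):

```mlir
// dst[i][j] = |src[i][j]|
hivm.hir.vabs ins(%src : memref<16x32xf32>) outs(%dst : memref<16x32xf32>)
// Tensor form, yielding a result tensor.
%r = hivm.hir.vabs ins(%t : tensor<16x32xf32>) outs(%init : tensor<16x32xf32>) -> tensor<16x32xf32>
```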
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | variadic of shaped of any type values | `dst` | variadic of shaped of any type values | `temp_buffer` | memref of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.vadd` (hivm::VAddOp) _Elementwise Binary Vector Addition Op_ Syntax: ```mlir operation ::= `hivm.hir.vadd` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)? ``` *From the Elementwise Nary Vector Op template:* This operation performs element-wise operation on N operands and produces a single result. It may perform either transpose or broadcast along the way (but not both). Common constraints: 1. Follows DestinationStyleOpInterface. 2. The number of input operands is N; the number of output/result is one. 3. The input/init operands and result have the same rank. 4. The first input is vector-only. Additional constraints: 1. The input/init operands and result have the same element type. 2. Support both Vector-Vector and Vector-Scalar operation. Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
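Illustrative forms of the two supported modes named in the constraints above (SSA values and shapes are hypothetical):

```mlir
// Vector-Vector: dst = a + b, element-wise.
hivm.hir.vadd ins(%a, %b : memref<16x32xf32>, memref<16x32xf32>) outs(%dst : memref<16x32xf32>)
// Vector-Scalar: dst = a + s, with s a scalar f32.
hivm.hir.vadd ins(%a, %s : memref<16x32xf32>, f32) outs(%dst : memref<16x32xf32>)
```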
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | variadic of any type | `dst` | variadic of shaped of any type values | `temp_buffer` | memref of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.vand` (hivm::VAndOp) _Elementwise Binary Vector And Op_ Syntax: ```mlir operation ::= `hivm.hir.vand` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)? ``` *From the Elementwise Nary Vector Op template:* This operation performs element-wise operation on N operands and produces a single result. It may perform either transpose or broadcast along the way (but not both). Common constraints: 1. Follows DestinationStyleOpInterface. 2. The number of input operands is N; the number of output/result is one. 3. The input/init operands and result have the same rank. 4. The first input is vector-only. Additional constraints: 1. The input/init operands and result have the same element type. 2. Support only Vector-Vector operation. Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`, `VectorOnlyTrait<1>` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
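An illustrative form (SSA values and shapes are hypothetical; per the constraints above, only Vector-Vector operands are supported):

```mlir
// Bitwise AND, element-wise: dst = a & b
hivm.hir.vand ins(%a, %b : memref<16x32xi16>, memref<16x32xi16>) outs(%dst : memref<16x32xi16>)
```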
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | variadic of any type | `dst` | variadic of shaped of any type values | `temp_buffer` | memref of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.varange` (hivm::VArangeOp) _Vector Arange Op_ Syntax: ```mlir operation ::= `hivm.hir.varange` attr-dict (`offset` `[` $offset^ `]`)? `strides` `[` $strides `]` `outs` `(` $dst `:` type($dst) `)` (`->` type($result)^)? ``` Fills a vector with the sequence 0, 1, 2, ... based on `strides` and `offset`. E.g. with offset = 1, strides = [1, 2], and tensor/memref shape 2x4xi32, the result is [[1, 3, 5, 7], [2, 4, 6, 8]]. Constraints: 1. Must have at least one stride. 2. Default offset is 0. Examples: ```mlir hivm.hir.varange offset[%o] strides[%s0, %s1] outs(%dst : memref<32xf32>) %result = hivm.hir.varange offset[%o] strides[%s0, %s1] outs(%dst : tensor<32xf32>) -> tensor<32xf32> ``` Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `OpPipeTrait`, `SinglePipeOpTrait`, `VectorCoreTypeTrait` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Operands | Operand | Description | | :-----: | ----------- | | `dst` | Tensor or Memref | `offset` | index | `strides` | variadic of index #### Results | Result | Description | | :----: | ----------- | | `result` | ranked tensor of any type values ### `hivm.hir.vbrc` (hivm::VBrcOp) _Vector Broadcast Op_ Syntax: ```mlir operation ::= `hivm.hir.vbrc` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast_dims` `=` $broadcast_dims^)? (`->` type($result)^)? ``` Broadcasts a vector or a scalar according to the broadcast axes array. Constraints: 1.
The input vector and output vector must have the same rank and the same element type. 2. For the input operand, the size of the broadcasted axis must be 1. 3. The broadcast indices array cannot be empty for vector input. 4. The broadcast indices array *must* be empty for scalar input. 5. The broadcast indices array cannot be larger than the rank of the input vector. 6. The broadcast indices must be in `[0, RankOfSrcVec)`. 7. For the i1 element type, the tail axis of `dst` must be aligned to 16; otherwise there is a risk of memory corruption. Examples: ```mlir // Scalar broadcast hivm.hir.vbrc ins(%src : i32) outs(%dst : memref) // Vector broadcast hivm.hir.vbrc ins(%src : memref<1xi32>) outs(%dst : memref) broadcast_dims = [0] %result = hivm.hir.vbrc ins(%src : tensor<1xi32>) outs(%dst : tensor) broadcast_dims = [0] -> tensor ``` Traits: `AlwaysSpeculatableImplTrait`, `CollapsibleConsecutiveTargetDimsTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait` Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `InferCoreTypeInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `broadcast_dims` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | any type | `dst` | Tensor or Memref | `temp_buffer` | memref of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.vcast` (hivm::VCastOp) _Elementwise Vector Type Conversion Op_ Syntax: ```mlir operation ::= `hivm.hir.vcast` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`round_mode` `=` $round_mode^)? (`cast` `=` $cast^)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)? ``` *From the Elementwise Nary Vector Op template:* This operation performs element-wise operation on N operands and produces a single result. It may perform either transpose or broadcast along the way (but not both). Common constraints: 1. Follows DestinationStyleOpInterface. 2. The number of input operands is N; the number of output/result is one. 3. The input/init operands and result have the same rank. 4. The first input is vector-only. Additional constraints: 1. 
Supports the following conversions: | src | dst | roundingmode | |------|------|---------------------------------------------------| | f32 | f32 | round, rint, floor, ceil, trunc | | f32 | f16 | round, rint, floor, ceil, trunc, odd | | f32 | i64 | round, rint, floor, ceil, trunc | | f32 | i32 | round, rint, floor, ceil, trunc | | f32 | i16 | round, rint, floor, ceil, trunc | | f32 | s64 | round, rint, floor, ceil, trunc | | f32 | bf16 | round, rint, floor, ceil, trunc | | f16 | f32 | rint | | f16 | i32 | round, rint, floor, ceil, trunc | | f16 | i16 | round, rint, floor, ceil, trunc | | f16 | i8 | round, rint, floor, ceil, trunc | | f16 | ui8 | round, rint, floor, ceil, trunc | | f16 | i4 | round, rint, floor, ceil, trunc | | bf16 | f32 | rint | | bf16 | i32 | round, rint, floor, ceil, trunc | | ui8 | f16 | rint | | i8 | f16 | rint | | i8 | i1 | rint | | i16 | f16 | round, rint, floor, ceil, trunc | | i16 | f32 | rint | | i32 | f32 | round, rint, floor, ceil, trunc | | i32 | i64 | rint | | i32 | i16 | rint | | i64 | i32 | rint | | i64 | f32 | round, rint, floor, ceil, trunc | | i4 | f16 | rint | | i1 | f16 | rint | | i1 | f32 | rint | Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `round_mode` | ::mlir::hivm::RoundModeAttr | RINT: round to nearest, ties to even (C `rint`); ROUND: round to nearest, ties away from zero (C `round`); FLOOR: round toward minus infinity (C `floor`); CEIL: round toward positive infinity (C `ceil`); TRUNC: round toward zero (C `trunc`); ODD: round to odd (von Neumann rounding). |
| `cast` | ::mlir::hivm::TypeFnAttr | HIVM cast attribute. |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
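An illustrative form combining the conversion table and the `round_mode` clause (SSA values and shapes are hypothetical; the `rint` spelling of the round-mode attribute is an assumption):

```mlir
// f32 -> f16 conversion, rounding to nearest-even.
hivm.hir.vcast ins(%src : memref<16x32xf32>) outs(%dst : memref<16x32xf16>) round_mode = rint
```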
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | variadic of any type | `dst` | variadic of shaped of any type values | `temp_buffer` | memref of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.vcmp` (hivm::VCmpOp) _Elementwise Binary Vector Comparison Op_ Syntax: ```mlir operation ::= `hivm.hir.vcmp` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`compare_mode` `=` $compare_mode^)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)? ``` *From the Elementwise Nary Vector Op template:* This operation performs element-wise operation on N operands and produces a single result. It may perform either transpose or broadcast along the way (but not both). Common constraints: 1. Follows DestinationStyleOpInterface. 2. The number of input operands is N; the number of output/result is one. 3. The input/init operands and result have the same rank. 4. The first input is vector-only. Compares elements from two source vectors. If the comparison result is true, the corresponding bit of `dst` is 1 or 8. Additional constraints: 1. The input vectors and output vector must have the same rank. 2. The element type of `dst` must be bool. 3. The input is vector-only. 4.
Supports the following data type: | compare mode | element type | |-------------------|-------------------------| | GE/GT/LE/LT/NE/EQ | f16, f32, i16, i32, i64 | Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>` Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
|-----------|-----------|-------------|
| `compare_mode` | ::mlir::hivm::CompareModeAttr | HIVM compare mode attribute. |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
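An illustrative form (SSA values and shapes are hypothetical; the `lt` spelling of the compare-mode attribute is an assumption, with the supported modes listed above as GE/GT/LE/LT/NE/EQ):

```mlir
// dst = (a < b) element-wise, producing a boolean mask.
hivm.hir.vcmp ins(%a, %b : memref<16x32xf32>, memref<16x32xf32>) outs(%dst : memref<16x32xi1>) compare_mode = lt
```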
#### Operands | Operand | Description | | :-----: | ----------- | | `src` | variadic of any type | `dst` | variadic of shaped of any type values #### Results | Result | Description | | :----: | ----------- | | `result` | variadic of ranked tensor of any type values ### `hivm.hir.vconcat` (hivm::VConcatOp) _Vector Concatenation Op_ Syntax: ```mlir operation ::= `hivm.hir.vconcat` `dim` `(` $dim `)` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`->` type($result)^)? ``` The concat operation constructs a tensor out of a variadic list of input tensors, concatenated along a static dimension number. All inputs and the result type must share the same rank. `dim` specifies the dimension along which to concatenate. The size of the concatenated dimension in the result must be equal to the sum of the sizes of the inputs along that dimension. All other dimensions in both the inputs and result must be the same size. Example: ```mlir hivm.hir.vconcat dim(1) ins(%0, %1 : tensor<136x2048xf32>, tensor<136x2048xf32>) outs(%2 : tensor<136x4096xf32>) -> tensor<136x4096xf32> ``` Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait` Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface` #### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `dim` | ::mlir::IntegerAttr | 64-bit signless integer attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vcos` (hivm::VCosOp)

_Elementwise Vector Cosine Op_

Syntax:

```mlir
operation ::= `hivm.hir.vcos` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
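For illustration, a minimal (hypothetical) use of `hivm.hir.vcos` consistent with the assembly format above; SSA value names and shapes are invented:

```mlir
%res = hivm.hir.vcos ins(%src : tensor<16x128xf32>) outs(%dst : tensor<16x128xf32>) -> tensor<16x128xf32>
```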
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vcumprod` (hivm::VCumprodOp)

_Vector Cumprod Op_

Syntax:

```mlir
operation ::= `hivm.hir.vcumprod` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` `cum_dims` `=` $cum_dims `reverse` `=` $reverse (`->` type($result)^)?
```

Calculates the cumulative product along the specified axis of `src`. Each element along the specified axis in the output of cumprod contains the product of all elements from the first element up to the current position in the original `src`.

Constraints:

1. The input vector and output vector must have the same rank and the same element type.

Arguments:

* `src`: the tensor/memref from which to calculate the cumulative product
* `dst`: the tensor/memref to store elements
* `cum_dims`: specifies the dimension along which to calculate the cumulative product.
* `reverse`: specifies the direction of the cumulative product.

Examples:

```mlir
hivm.hir.vcumprod ins(%src : memref) outs(%dst : memref) cum_dims = [0] reverse = true
%result = hivm.hir.vcumprod ins(%src : tensor) outs(%dst : tensor) cum_dims = [0] reverse = true -> tensor
```

Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `cum_dims` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute; must be in increasing order |
| `reverse` | ::mlir::BoolAttr | bool attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vcumsum` (hivm::VCumsumOp)

_Vector Cumsum Op_

Syntax:

```mlir
operation ::= `hivm.hir.vcumsum` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` `cum_dims` `=` $cum_dims `reverse` `=` $reverse (`->` type($result)^)?
```

Calculates the cumulative sum along the specified axis of `src`. Each element along the specified axis in the output of cumsum contains the sum of all elements from the first element up to the current position in the original `src`.

Constraints:

1. The input vector and output vector must have the same rank and the same element type.

Arguments:

* `src`: the tensor/memref from which to calculate the cumulative sum
* `dst`: the tensor/memref to store elements
* `cum_dims`: specifies the dimension along which to calculate the cumulative sum.
* `reverse`: specifies the direction of the cumulative sum.

Examples:

```mlir
hivm.hir.vcumsum ins(%src : memref) outs(%dst : memref) cum_dims = [0] reverse = true
%result = hivm.hir.vcumsum ins(%src : tensor) outs(%dst : tensor) cum_dims = [0] reverse = true -> tensor
```

Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `cum_dims` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute; must be in increasing order |
| `reverse` | ::mlir::BoolAttr | bool attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vdeinterleave` (hivm::VDeinterleaveOp)

_Vector Deinterleave Op_

Syntax:

```mlir
operation ::= `hivm.hir.vdeinterleave` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`channel_num` `=` $channel_num^)? (`index_mode` `=` $index_mode^)? (`->` type($result)^)?
```

Deinterleaves one tensor along the last dimension. The size of the tensor's last dimension must be a multiple of `channel_num`.

Traits: `AlwaysSpeculatableImplTrait`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `channel_num` | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| `index_mode` | ::mlir::hivm::DeinterleaveModeAttr | HIVM deinterleave mode (HIVM deinterleave index mode) |
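For illustration, a hypothetical invocation consistent with the assembly format above, splitting two interleaved channels into two destinations; names and shapes are invented:

```mlir
hivm.hir.vdeinterleave ins(%src : memref<16x128xf32>) outs(%dst0, %dst1 : memref<16x64xf32>, memref<16x64xf32>) channel_num = 2
```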
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | variadic of Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vdiv` (hivm::VDivOp)

_Elementwise Binary Vector Division Op_

Syntax:

```mlir
operation ::= `hivm.hir.vdiv` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports only Vector-Vector operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
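For illustration, a minimal (hypothetical) Vector-Vector division consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vdiv ins(%lhs, %rhs : tensor<8x64xf32>, tensor<8x64xf32>) outs(%init : tensor<8x64xf32>) -> tensor<8x64xf32>
```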
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.verf` (hivm::VErfOp)

_Elementwise Vector Error Function Op_

Syntax:

```mlir
operation ::= `hivm.hir.verf` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
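For illustration, a minimal (hypothetical) use of `hivm.hir.verf` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.verf ins(%src : tensor<16x128xf32>) outs(%dst : tensor<16x128xf32>) -> tensor<16x128xf32>
```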
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vexp` (hivm::VExpOp)

_Elementwise Vector Exponential Op_

Syntax:

```mlir
operation ::= `hivm.hir.vexp` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
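For illustration, a minimal (hypothetical) use of `hivm.hir.vexp` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vexp ins(%src : tensor<8x64xf32>) outs(%dst : tensor<8x64xf32>) -> tensor<8x64xf32>
```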
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vflip` (hivm::VFlipOp)

_Vector Flip Op_

Syntax:

```mlir
operation ::= `hivm.hir.vflip` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` `flip_axis` `=` $flip_axis (`->` type($result)^)?
```

Flips a tensor along the last dimension.

Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `flip_axis` | ::mlir::IntegerAttr | 64-bit signless integer attribute |
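For illustration, a hypothetical invocation consistent with the assembly format above, flipping the last axis of a rank-2 tensor; names, shapes, and attribute rendering are invented:

```mlir
%res = hivm.hir.vflip ins(%src : tensor<4x32xf16>) outs(%dst : tensor<4x32xf16>) flip_axis = 1 -> tensor<4x32xf16>
```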
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vgather` (hivm::VGatherOp)

_Vector Gather Op_

Syntax:

```mlir
operation ::= `hivm.hir.vgather` attr-dict `ins` `(` $src `:` type($src) `)` `indices` `(` $indices `:` type($indices) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`->` type($result)^)?
```

Retrieves elements from a tensor/memref according to the given indices and stores them in another tensor/memref. The gather axis is the last dimension.

Arguments:

* `src`: the tensor/memref from which to gather elements
* `indices`: the indices of the elements to gather from `src`
* `dst`: the tensor/memref to store elements
* `temp_buffer`: extra memory required by the gather op

Traits: `AlwaysSpeculatableImplTrait`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `indices` | Tensor or Memref |
| `dst` | Tensor or Memref |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vinterleave` (hivm::VInterleaveOp)

_Vector Interleave Op_

Syntax:

```mlir
operation ::= `hivm.hir.vinterleave` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` `interleave_channel_nums` `=` $interleave_channel_nums (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`->` type($result)^)?
```

Interleaves the values of `N` tensors along their last dimension. All tensors must have the same shape.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `interleave_channel_nums` | ::mlir::IntegerAttr | 64-bit signless integer attribute |
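For illustration, a hypothetical invocation consistent with the assembly format above, interleaving two equally shaped sources into one destination; names and shapes are invented:

```mlir
hivm.hir.vinterleave ins(%a, %b : memref<16x64xf32>, memref<16x64xf32>) outs(%dst : memref<16x128xf32>) interleave_channel_nums = 2
```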
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | Tensor or Memref |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vln` (hivm::VLnOp)

_Elementwise Vector Natural Logarithm Op_

Syntax:

```mlir
operation ::= `hivm.hir.vln` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
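For illustration, a minimal (hypothetical) use of `hivm.hir.vln` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vln ins(%src : tensor<8x128xf32>) outs(%dst : tensor<8x128xf32>) -> tensor<8x128xf32>
```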
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmax` (hivm::VMaxOp)

_Elementwise Binary Vector Maximum Op_

Syntax:

```mlir
operation ::= `hivm.hir.vmax` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports both Vector-Vector and Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
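For illustration, a minimal (hypothetical) Vector-Vector maximum consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vmax ins(%lhs, %rhs : tensor<8x64xf32>, tensor<8x64xf32>) outs(%init : tensor<8x64xf32>) -> tensor<8x64xf32>
```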
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmin` (hivm::VMinOp)

_Elementwise Binary Vector Minimum Op_

Syntax:

```mlir
operation ::= `hivm.hir.vmin` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports both Vector-Vector and Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
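For illustration, a minimal (hypothetical) Vector-Vector minimum consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vmin ins(%lhs, %rhs : tensor<8x64xf32>, tensor<8x64xf32>) outs(%init : tensor<8x64xf32>) -> tensor<8x64xf32>
```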
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmod` (hivm::VModOp)

_Elementwise Vector Mod Op_

Syntax:

```mlir
operation ::= `hivm.hir.vmod` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
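For illustration, a minimal (hypothetical) use of `hivm.hir.vmod` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vmod ins(%lhs, %rhs : tensor<8x64xi32>, tensor<8x64xi32>) outs(%init : tensor<8x64xi32>) -> tensor<8x64xi32>
```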
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmul` (hivm::VMulOp)

_Elementwise Binary Vector Multiplication Op_

Syntax:

```mlir
operation ::= `hivm.hir.vmul` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports both Vector-Vector and Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
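For illustration, a minimal (hypothetical) Vector-Vector multiplication consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vmul ins(%lhs, %rhs : tensor<8x64xf16>, tensor<8x64xf16>) outs(%init : tensor<8x64xf16>) -> tensor<8x64xf16>
```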
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmulext` (hivm::VMulExtOp)

_Elementwise Binary Vector Multiplication that Calculates the Most Significant 32 Bits_

Syntax:

```mlir
operation ::= `hivm.hir.vmulext` attr-dict (`ins` `(` $src^ `:` type($src) `)`)? (`outs` `(` $dst^ `:` type($dst) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports Vector-Vector operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
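For illustration, a minimal (hypothetical) use of `hivm.hir.vmulext` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vmulext ins(%lhs, %rhs : tensor<8x64xi32>, tensor<8x64xi32>) outs(%init : tensor<8x64xi32>) -> tensor<8x64xi32>
```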
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vmulextended` (hivm::VMulextendedOp)

_Vector Mulextended Op_

Syntax:

```mlir
operation ::= `hivm.hir.vmulextended` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`->` type($result)^)?
```

Performs vmul on two tensors and produces both the high and the low 16 bits of each product.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of Tensor or Memref |
| `dst` | variadic of Tensor or Memref |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vnot` (hivm::VNotOp)

_Elementwise Vector Not Op_

Syntax:

```mlir
operation ::= `hivm.hir.vnot` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
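For illustration, a minimal (hypothetical) use of `hivm.hir.vnot` consistent with the assembly format above; names and shapes are invented:

```mlir
%res = hivm.hir.vnot ins(%src : tensor<8x64xi16>) outs(%dst : tensor<8x64xi16>) -> tensor<8x64xi16>
```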
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vor` (hivm::VOrOp)

_Elementwise Binary Vector Or Op_

Syntax:

```mlir
operation ::= `hivm.hir.vor` attr-dict `ins` `(` $src `:` type($src) `)` `outs` `(` $dst `:` type($dst) `)` (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)? (`broadcast` `=` $broadcast^)? (`transpose` `=` $transpose^)? (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports only Vector-Vector operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `CommutativeOpTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`, `VectorOnlyTrait<1>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
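The original section carries no usage example for `hivm.hir.vor`; the following hypothetical instance follows the declared assembly format (shapes, element types, and SSA names are illustrative assumptions, not taken from the source):

```mlir
%result = hivm.hir.vor ins(%src0, %src1 : tensor<4x16xi16>, tensor<4x16xi16>)
                       outs(%dst : tensor<4x16xi16>) -> tensor<4x16xi16>
```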
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vpad` (hivm::VPadOp)

_Vector Pad Op_

Syntax:

```mlir
operation ::= `hivm.hir.vpad` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              `low` `` custom($low, $static_low)
              `high` `` custom($high, $static_high)
              `pad_value` $pad_value `:` type($pad_value)
              (`->` type($result)^)?
```

Pads the input operand. The operation's semantics are similar to `tensor.pad`.

Arguments:

* `src`: the tensor/memref on which to pad values
* `dst`: reserved for bufferization
* `pad_value`: the value to pad with
* `low`: the padding lengths along the start of each dimension
* `high`: the padding lengths along the end of each dimension

Example:

```mlir
hivm.hir.vpad ins(%src : tensor<2x16xf32>) outs(%dst: tensor)
    low[%first_dim_low, 0] high[%first_dim_high, 0]
    pad_value %pad_value : f32 -> tensor
```

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `static_low` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `static_high` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |
| `pad_value` | any type |
| `low` | variadic of index |
| `high` | variadic of index |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vpow` (hivm::VPowOp)

_Elementwise Binary Vector Power Op_

Syntax:

```mlir
operation ::= `hivm.hir.vpow` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports both Vector-Vector and Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`, `VectorOnlyTrait<1>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
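No example accompanies `hivm.hir.vpow`; a hypothetical Vector-Vector and a Vector-Scalar instance following the declared assembly format (shapes, element types, and SSA names are illustrative assumptions):

```mlir
%vv = hivm.hir.vpow ins(%src0, %src1 : tensor<4x16xf32>, tensor<4x16xf32>)
                    outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
%vs = hivm.hir.vpow ins(%src0, %exponent : tensor<4x16xf32>, f32)
                    outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```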
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vrec` (hivm::VRecOp)

_Elementwise Vector Reciprocal Op_

Syntax:

```mlir
operation ::= `hivm.hir.vrec` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
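No example accompanies `hivm.hir.vrec`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vrec ins(%src : tensor<4x16xf32>)
                        outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```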
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vreduce` (hivm::VReduceOp)

_Vector Reduction Op_

Syntax:

```mlir
operation ::= `hivm.hir.vreduce` attr-dict $arith
              `ins` `(` $src `:` type($src) `)`
              (`indices` `(` $indices^ `:` type($indices) `)`)?
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              `reduce_dims` `=` $reduce_dims
              (`->` type($result)^)?
```

Reduces one or more axes of the source vector according to the reduction axes array, starting from an init value.

Constraints:

1. The input vector and output vector must have the same rank and the same element type.
2. For the output operand, the size of each reduced axis must be 1.
3. The reduction indices array cannot be empty, nor larger than the rank of the input vector.
4. The reduced indices must be in `[0, RankOfDstVec)`.

Examples:

```mlir
hivm.hir.vreduce ins(%src : memref) outs(%dst : memref<1xf32>) reduce_dims = [1]
%result = hivm.hir.vreduce ins(%src : tensor) outs(%dst : tensor<1xf32>) reduce_dims = [0] -> tensor<1xf32>
```

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `arith` | ::mlir::hivm::ReduceOpAttr | HIVM reduction arithmetic operation attribute |
| `reduce_dims` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | variadic of Tensor or Memref |
| `temp_buffer` | memref of any type values |
| `indices` | Tensor or Memref |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vrelu` (hivm::VReluOp)

_Elementwise Vector Rectified Linear Unit Op_

Syntax:

```mlir
operation ::= `hivm.hir.vrelu` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
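No example accompanies `hivm.hir.vrelu`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vrelu ins(%src : tensor<4x16xf16>)
                         outs(%dst : tensor<4x16xf16>) -> tensor<4x16xf16>
```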
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vrsqrt` (hivm::VRsqrtOp)

_Elementwise Vector Reciprocal Square Root Op_

Syntax:

```mlir
operation ::= `hivm.hir.vrsqrt` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
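No example accompanies `hivm.hir.vrsqrt`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vrsqrt ins(%src : tensor<4x16xf32>)
                          outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```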
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vsel` (hivm::VSelOp)

_Elementwise Vector Selection Op_

Syntax:

```mlir
operation ::= `hivm.hir.vsel` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Selects elements from two source vectors according to the binary `condition` vector. If the corresponding bit of the indicator is 1, `src0` is selected; otherwise, `src1` is selected.

Additional constraints:

1. The input vectors and the output vector must have the same rank.
2. The element type of the indicator vector must be bool.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<3>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
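No example accompanies `hivm.hir.vsel`; a hypothetical instance following the declared assembly format, with the condition vector first (shapes, element types, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vsel ins(%cond, %src0, %src1 : tensor<4x16xi1>, tensor<4x16xf32>, tensor<4x16xf32>)
                        outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```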
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vshl` (hivm::VShLOp)

_Elementwise Binary Vector Shift Left Op_

Syntax:

```mlir
operation ::= `hivm.hir.vshl` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input vector and the result have the same element type.
2. Supports only Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `ScalarOnlyHWTrait<1>`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
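No example accompanies `hivm.hir.vshl`; a hypothetical Vector-Scalar instance following the declared assembly format (shapes, element types, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vshl ins(%src, %shift : tensor<4x16xi32>, i32)
                        outs(%dst : tensor<4x16xi32>) -> tensor<4x16xi32>
```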
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vshr` (hivm::VShROp)

_Elementwise Binary Vector Shift Right Op_

Syntax:

```mlir
operation ::= `hivm.hir.vshr` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`round` `:` $round^ )?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input vector and the result have the same element type.
2. Supports only Vector-Scalar operation.
3. If `round` is set to true, rounding is applied during arithmetic shift right.
Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `ScalarOnlyHWTrait<1>`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `round` | ::mlir::BoolAttr | bool attribute |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
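No example accompanies `hivm.hir.vshr`; a hypothetical Vector-Scalar instance with rounding enabled, following the declared assembly format (shapes, element types, SSA names, and the `round : true` spelling are illustrative assumptions):

```mlir
%result = hivm.hir.vshr ins(%src, %shift : tensor<4x16xi32>, i32)
                        outs(%dst : tensor<4x16xi32>) round : true -> tensor<4x16xi32>
```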
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vsin` (hivm::VSinOp)

_Elementwise Vector Sine Op_

Syntax:

```mlir
operation ::= `hivm.hir.vsin` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
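No example accompanies `hivm.hir.vsin`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vsin ins(%src : tensor<4x16xf32>)
                        outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```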
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vsort` (hivm::VSortOp)

_Vector Sort Op_

Syntax:

```mlir
operation ::= `hivm.hir.vsort` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              `descending` `=` $descending
              `sort_axis` `=` $sort_axis
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`->` type($result)^)?
```

Sorts the sorting axis of `src` in ascending or descending order, and outputs the sorted values and the indices corresponding to those values.

Constraints:

1. The input vector and output vector must have the same rank.
2. Currently only tail-axis sorting is supported.

Arguments:

* `src`: the tensor/memref to be sorted
* `dst_value`: the tensor/memref to store the sorted values
* `dst_index`: the tensor/memref to store the indices corresponding to `dst_value`
* `descending`: determines whether to sort in ascending or descending order; the default is false, which means ascending order
* `sort_axis`: the axis to be sorted

Examples:

```mlir
hivm.hir.vsort ins(%src : memref) outs(%dst : memref) descending = true sort_axis = 0
%result = hivm.hir.vsort ins(%src : tensor) outs(%dst : tensor) descending = true sort_axis = 0 -> tensor
```

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `OpPipeTrait`, `SinglePipeOpTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `descending` | ::mlir::BoolAttr | bool attribute |
| `sort_axis` | ::mlir::IntegerAttr | 64-bit signless integer attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | variadic of Tensor or Memref |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vsqrt` (hivm::VSqrtOp)

_Elementwise Vector Square Root Op_

Syntax:

```mlir
operation ::= `hivm.hir.vsqrt` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
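No example accompanies `hivm.hir.vsqrt`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vsqrt ins(%src : tensor<4x16xf32>)
                         outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```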
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vsub` (hivm::VSubOp)

_Elementwise Binary Vector Subtraction Op_

Syntax:

```mlir
operation ::= `hivm.hir.vsub` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports both Vector-Vector and Vector-Scalar operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `BroadcastableOTF`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `ImplByScalarOpInterface`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
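No example accompanies `hivm.hir.vsub`; a hypothetical Vector-Vector instance following the declared assembly format (shapes, element types, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vsub ins(%src0, %src1 : tensor<4x16xf32>, tensor<4x16xf32>)
                        outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```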
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vtanh` (hivm::VTanhOp)

_Elementwise Vector Hyperbolic Tangent Op_

Syntax:

```mlir
operation ::= `hivm.hir.vtanh` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<1>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
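No example accompanies `hivm.hir.vtanh`; a hypothetical instance following the declared assembly format (shape, element type, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vtanh ins(%src : tensor<4x16xf32>)
                         outs(%dst : tensor<4x16xf32>) -> tensor<4x16xf32>
```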
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of any type |
| `dst` | variadic of shaped of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vtranspose` (hivm::VTransposeOp)

_Vector Transpose Op_

Syntax:

```mlir
operation ::= `hivm.hir.vtranspose` attr-dict
              `ins` `(` $src `:` type($src) `)`
              `outs` `(` $dst `:` type($dst) `)`
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`permutation` `=` $permutation^)?
              (`disable_align` `=` $disable_align^)?
              (`->` type($result)^)?
```

Permutes the dimensions of `src` according to the given `permutation`. In other words: `dim(dst, i) = dim(src, permutation[i])`.

Constraints:

1. The input vector and output vector must have the same rank and the same element type.

Examples:

```mlir
hivm.hir.vtranspose ins(%src : memref<32x8xf32>) outs(%dst : memref<8x32xf32>) permutation = [1, 0]
%result = hivm.hir.vtranspose ins(%src : tensor<32x8xf32>) outs(%dst: tensor<8x32xf32>) permutation = [1, 0] -> tensor<8x32xf32>
```

Traits: `AlwaysSpeculatableImplTrait`, `OpPipeTrait`, `SinglePipeOpTrait`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`

Interfaces: `BiShengIRAggregatedOpInterface`, `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `permutation` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `disable_align` | ::mlir::BoolAttr | bool attribute |
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | Tensor or Memref |
| `dst` | Tensor or Memref |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.vxor` (hivm::VXorOp)

_Elementwise Binary Vector Xor Op_

Syntax:

```mlir
operation ::= `hivm.hir.vxor` attr-dict
              (`ins` `(` $src^ `:` type($src) `)`)?
              (`outs` `(` $dst^ `:` type($dst) `)`)?
              (`temp_buffer` `(` $temp_buffer^ `:` type($temp_buffer) `)`)?
              (`broadcast` `=` $broadcast^)?
              (`transpose` `=` $transpose^)?
              (`->` type($result)^)?
```

*From the Elementwise Nary Vector Op template:*

This operation performs an element-wise operation on N operands and produces a single result. It may perform either a transpose or a broadcast along the way (but not both).

Common constraints:

1. Follows DestinationStyleOpInterface.
2. The number of input operands is N; the number of outputs/results is one.
3. The input/init operands and the result have the same rank.
4. The first input is vector-only.

Additional constraints:

1. The input/init operands and the result have the same element type.
2. Supports only Vector-Vector operation.

Traits: `AlwaysSpeculatableImplTrait`, `AttrSizedOperandSegments`, `CollapsibleConsecutiveTargetDimsTrait`, `ElementwiseNaryOpTrait<2>`, `HIVMOpSameOperandsAndResultRank`, `OpPipeTrait`, `SameOperandsElementType`, `SinglePipeOpTrait`, `TransposableOTF`, `UniformReassociationFlattenTrait`, `VectorCoreTypeTrait`, `VectorOnlyTrait<0>`, `VectorOnlyTrait<1>`

Interfaces: `ConditionallySpeculatable`, `DestinationStyleOpInterface`, `ExtraBufferOpInterface`, `FlattenInterface`, `HIVMCoreTypeInterface`, `HIVMStructuredOpInterface`, `HIVMStructuredOp`, `MemoryEffectsOpInterface`, `OpPipeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `transpose` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
| `broadcast` | ::mlir::DenseI64ArrayAttr | i64 dense array attribute |
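No example accompanies `hivm.hir.vxor`; a hypothetical Vector-Vector instance following the declared assembly format (shapes, element types, and SSA names are illustrative assumptions):

```mlir
%result = hivm.hir.vxor ins(%src0, %src1 : tensor<4x16xi16>, tensor<4x16xi16>)
                        outs(%dst : tensor<4x16xi16>) -> tensor<4x16xi16>
```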
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `src` | variadic of shaped of any type values |
| `dst` | variadic of shaped of any type values |
| `temp_buffer` | memref of any type values |

#### Results

| Result | Description |
| :----: | ----------- |
| `result` | variadic of ranked tensor of any type values |

### `hivm.hir.wait_flag` (hivm::WaitFlagOp)

_HIVM wait flag._

Syntax:

```mlir
operation ::= `hivm.hir.wait_flag` `[` $set_pipe `,` $wait_pipe `,` custom($static_event_id, $dynamic_event_id) `]` attr-dict
```

Interfaces: `InferCoreTypeInterface`

#### Attributes
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `set_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. |
| `wait_pipe` | ::mlir::hivm::PipeAttr | HIVM Op pipe attribute. |
| `static_event_id` | ::mlir::hivm::EventAttr | HIVM event attribute for synchronization. |
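As an illustration, a `hivm.hir.wait_flag` following the syntax above might print as below; the pipe and event choices are invented for the example, and the exact printed form of the custom event-id directive may differ:

```mlir
// Hypothetical: wait on event 0 set by the MTE2 pipe before the vector
// pipe proceeds.
hivm.hir.wait_flag [#hivm.pipe<PIPE_MTE2>, #hivm.pipe<PIPE_V>, #hivm.event<EVENT_ID0>]
```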
#### Operands

| Operand | Description |
| :-----: | ----------- |
| `dynamic_event_id` | 64-bit signless integer |

## Attributes

### AddressSpaceAttr

Syntax:

```mlir
#hivm.address_space<
  ::mlir::hivm::AddressSpace   # address_space
>
```

HIVM address space mapping attribute. Maps to GM, L1, L0A, L0B, L0C and UB.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| address_space | `::mlir::hivm::AddressSpace` | an enum of type AddressSpace |

### AlignKindAttr

alignment kind information

Syntax:

```mlir
#hivm.align_kind<
  ::mlir::hivm::AlignKind   # value
>
```

HIVM alignment kind attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::AlignKind` | an enum of type AlignKind |

### AllocAlignDimsAttr

Syntax: `#hivm.alloc_align_dims`

HIVM alloc align dims.

### AllocAlignValueInByteAttr

Syntax: `#hivm.alloc_align_value_in_byte`

HIVM alloc align value in bytes.

### AtomicKindAttr

Atomic Operation Kind for StoreOp

Syntax:

```mlir
#hivm.atomic_kind<
  ::mlir::hivm::AtomicKind   # value
>
```

HIVM atomic store kind attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::AtomicKind` | an enum of type AtomicKind |

### AxisKindAttr

hivm op axis kind information

Syntax:

```mlir
#hivm.axis_kind<
  ::mlir::hivm::AxisKind   # value
>
```

HIVM op axis kind attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::AxisKind` | an enum of type AxisKind |

### HIVMBlockMappingAttr

Syntax:

```mlir
#hivm.block<
  std::optional   # order
>
```

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| order | `std::optional` | |

### CompareModeAttr

Compare Mode for VCmpOp

Syntax:

```mlir
#hivm.compare_mode<
  ::mlir::hivm::CompareMode   # value
>
```

HIVM compare mode attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::CompareMode` | an enum of type CompareMode |

### DCCIModeAttr

hivm dcci mode

Syntax:

```mlir
#hivm.DCCIMode<
  ::mlir::hivm::DCCIMode   # value
>
```

HIVM DCCI mode attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::DCCIMode` | an enum of type DCCIMode |

### DataCacheKindAttr

hivm data cache kind

Syntax:

```mlir
#hivm.DataCacheKind<
  ::mlir::hivm::DataCacheKind   # value
>
```

HIVM data cache kind attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::DataCacheKind` | an enum of type DataCacheKind |

### DataLayoutAttr

Syntax:

```mlir
#hivm.data_layout<
  ::mlir::hivm::DataLayout,   # data_layout
  std::optional,   # transpose
  std::optional   # fractalSizes
>
```

HIVM data layout mapping attribute. Maps to DOTA_ND, DOTB_ND, DOTC_ND, zN, nZ and ND.

- `transpose`: Indicates that the layout is transposed. Only valid, and must be present, for the DOTA_ND and DOTB_ND layouts.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| data_layout | `::mlir::hivm::DataLayout` | an enum of type DataLayout |
| transpose | `std::optional` | |
| fractalSizes | `std::optional` | |

### DeinterleaveModeAttr

HIVM deinterleave mode

Syntax:

```mlir
#hivm.deinterleave_mode<
  ::mlir::hivm::DeinterleaveMode   # value
>
```

HIVM deinterleave index mode

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::DeinterleaveMode` | an enum of type DeinterleaveMode |

### DescaleModeAttr

descale mode for matmul

Syntax:

```mlir
#hivm.descale_mode<
  ::mlir::hivm::DescaleMode   # value
>
```

HIVM descale mode attribute for matmul op.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::DescaleMode` | an enum of type DescaleMode |

### DisableAutoInjectBlockSyncAttr

Syntax: `#hivm.disable_auto_inject_block_sync`

Disable auto injection of block sync; block sync injection is skipped.

### EventAttr

Syntax:

```mlir
#hivm.event<
  ::mlir::hivm::EVENT   # event
>
```

HIVM event attribute for synchronization.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| event | `::mlir::hivm::EVENT` | an enum of type EVENT |

### FixpipePreQuantModeAttr

HIVM fixpipe pre_quant mode

Syntax:

```mlir
#hivm.fixpipe_pre_quant_mode<
  ::mlir::hivm::FixpipePreQuantMode   # value
>
```

HIVM fixpipe pre_quant mode

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::FixpipePreQuantMode` | an enum of type FixpipePreQuantMode |

### FixpipePreReluModeAttr

HIVM fixpipe pre_relu mode

Syntax:

```mlir
#hivm.fixpipe_pre_relu_mode<
  ::mlir::hivm::FixpipePreReluMode   # value
>
```

HIVM fixpipe pre_relu mode

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::FixpipePreReluMode` | an enum of type FixpipePreReluMode |

### HIVMFuncDynMemrefArgsAttr

Syntax: `#hivm.func_dyn_memref_args`

HIVM FuncDynMemrefArgs marks the index array of the dynamic memref arguments of a function.

### InsertSliceSourceIndexAttr

Syntax: `#hivm.insert_slice_source_index`

Specifies which operand is the insert_slice source in the vconcat op.

### MultiBufferAttr

Syntax: `#hivm.multi_buffer`

HIVM multi-buffer attribute.

### PadModeAttr

Syntax:

```mlir
#hivm.padmode<
  ::mlir::hivm::PadMode   # padmode
>
```

HIVM pad mode attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| padmode | `::mlir::hivm::PadMode` | an enum of type PadMode |

### ParallelLoopAttr

Syntax: `#hivm.parallel_loop`

Specifies that the marked loop can run in parallel.

### PipeAttr

Syntax:

```mlir
#hivm.pipe<
  ::mlir::hivm::PIPE   # pipe
>
```

HIVM Op pipe attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| pipe | `::mlir::hivm::PIPE` | an enum of type PIPE |

### ReduceOpAttr

Syntax:

```mlir
#hivm.reduce_op<
  ::mlir::hivm::ReduceOperation   # reduce_op
>
```

HIVM reduction arithmetic operation attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| reduce_op | `::mlir::hivm::ReduceOperation` | an enum of type ReduceOperation |

### RoundModeAttr

Round Mode for VCastOp

Syntax:

```mlir
#hivm.round_mode<
  ::mlir::hivm::RoundMode   # value
>
```

- RINT: round to nearest, ties to even (C `rint`)
- ROUND: round to nearest, ties away from zero (C `round`)
- FLOOR: round toward negative infinity (C `floor`)
- CEIL: round toward positive infinity (C `ceil`)
- TRUNC: round toward zero (C `trunc`)
- ODD: round to odd (von Neumann rounding)

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::RoundMode` | an enum of type RoundMode |

### StorageAlignedAttr

Syntax: `#hivm.storage_aligned`

If a module is tagged with this attribute, all of the operations within all device functions in the module are aligned. If a function is tagged with this attribute, all of the operations in the function are aligned.

### StrideAlignDimsAttr

Syntax: `#hivm.stride_align_dims`

HIVM stride align dims.

### StrideAlignValueInByteAttr

Syntax: `#hivm.stride_align_value_in_byte`

HIVM stride align value in bytes.
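Enum-parameterized attributes like the ones above are written with the enum case's string form as the parameter. A minimal sketch using MLIR attribute aliases (the alias names are arbitrary):

```mlir
// Illustrative attribute literals; each parameter uses the enum case's
// string form from the Enums section of this document.
#ub     = #hivm.address_space<ub>     // UB address space
#add    = #hivm.atomic_kind<add>      // atomic add for a store
#rint   = #hivm.round_mode<rint>      // round to nearest, ties to even
#pipe_v = #hivm.pipe<PIPE_V>          // vector pipe
```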
### HIVMSubBlockMappingAttr

Syntax:

```mlir
#hivm.sub_block<
  ::mlir::hivm::MappingId   # sub_block
>
```

HIVM sub block mapping attribute for the cv block dim ratio of a mix func.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| sub_block | `::mlir::hivm::MappingId` | an enum of type MappingId |

### SyncBlockInstrModeAttr

Syntax:

```mlir
#hivm.sync_block_instr_mode<
  ::mlir::hivm::SyncBlockInstrMode   # sync_instr_mode
>
```

HIVM synchronization block instruction mode attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| sync_instr_mode | `::mlir::hivm::SyncBlockInstrMode` | an enum of type SyncBlockInstrMode |

### SyncBlockModeAttr

Syntax:

```mlir
#hivm.sync_block_mode<
  ::mlir::hivm::SyncBlockMode   # sync_mode
>
```

HIVM synchronization block mode attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| sync_mode | `::mlir::hivm::SyncBlockMode` | an enum of type SyncBlockMode |

### TCoreTypeAttr

Syntax:

```mlir
#hivm.tcore_type<
  ::mlir::hivm::TCoreType   # tcoretype
>
```

HIVM op core type attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| tcoretype | `::mlir::hivm::TCoreType` | an enum of type TCoreType |

### TCoreTypeMarkerAttr

Syntax:

```mlir
#hivm.tcore_type_marker<
  ::mlir::hivm::TCoreType   # tcoretype
>
```

HIVM op core type marker attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| tcoretype | `::mlir::hivm::TCoreType` | an enum of type TCoreType |

### TFuncCoreTypeAttr

Syntax:

```mlir
#hivm.func_core_type<
  ::mlir::hivm::TFuncCoreType   # funcCoreType
>
```

HIVM function core type attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| funcCoreType | `::mlir::hivm::TFuncCoreType` | an enum of type TFuncCoreType |

### TModuleCoreTypeAttr

Syntax:

```mlir
#hivm.module_core_type<
  ::mlir::hivm::TModuleCoreType   # moduleCoreType
>
```

HIVM module core type attribute. If all of the functions within the module have the `AIV` func core type, the module core type is `AIV`. If all of the functions within the module have the `AIC` func core type, the module core type is `AIC`. Otherwise, the module core type is `MIX`.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| moduleCoreType | `::mlir::hivm::TModuleCoreType` | an enum of type TModuleCoreType |

### TPartOfMixAttr

Syntax: `#hivm.part_of_mix`

The HIVM function is a part of a mix kernel.

### TypeFnAttr

Cast for VCastOp

Syntax:

```mlir
#hivm.cast<
  ::mlir::hivm::TypeFn   # value
>
```

HIVM cast attribute.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::TypeFn` | an enum of type TypeFn |

### UnitFlagAttr

Syntax:

```mlir
#hivm.unit_flag<
  ::mlir::hivm::UNIT_FLAG   # unit_flag
>
```

HIVM unit flag attribute for synchronization.

#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| unit_flag | `::mlir::hivm::UNIT_FLAG` | an enum of type UNIT_FLAG |

### UnlikelyConditionAttr

Syntax: `#hivm.unlikely_condition`

Specifies that the marked condition is unlikely to evaluate to true.

### VFModeAttr

HIVM VF Mode

Syntax:

```mlir
#hivm.vf_mode<
  ::mlir::hivm::VFMode   # value
>
```

HIVM VF mode attribute.
#### Parameters

| Parameter | C++ type | Description |
| :-------: | :-------: | ----------- |
| value | `::mlir::hivm::VFMode` | an enum of type VFMode |

## Enums

### AddressSpace

HIVM Address Space

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| Zero | `0` | zero |
| GM | `1` | gm |
| L1 | `2` | cbuf |
| L0A | `3` | ca |
| L0B | `4` | cb |
| L0C | `5` | cc |
| UB | `6` | ub |

### AlignKind

alignment kind information

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| ALIGN | `0` | align |
| UNALIGNED | `1` | unaligned |
| UNKNOWN | `2` | unknown |

### AtomicKind

Atomic Operation Kind for StoreOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| NONE | `0` | none |
| ADD | `1` | add |
| MAX | `2` | max |
| MIN | `3` | min |
| AND | `4` | and |
| OR | `5` | or |
| XOR | `6` | xor |
| CAS | `7` | cas |
| XCHG | `8` | xchg |
| UMAX | `9` | umax |
| UMIN | `10` | umin |

### AxisKind

hivm op axis kind information

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| FIRST | `0` | first |
| MIDDLE | `1` | middle |
| LAST | `2` | last |

### CompareMode

Compare Mode for VCmpOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| EQ | `0` | eq |
| NE | `1` | ne |
| LT | `2` | lt |
| GT | `3` | gt |
| GE | `4` | ge |
| LE | `5` | le |

### DCCIMode

hivm dcci mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| SINGLE_CACHE_LINE | `0` | single_cache_line |
| ALL_CACHE_LINES | `1` | all_cache_lines |

### DataCacheKind

hivm data cache kind

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| ALL | `0` | all |
| UB | `1` | ub |
| OUT | `2` | out |
| ATOMIC | `3` | atomic |

### DataLayout

HIVM data layout

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| DOTA_ND | `1` | dotA_ND |
| DOTB_ND | `2` | dotB_ND |
| DOTC_ND | `3` | dotC_ND |
| nZ | `4` | nZ |
| zN | `5` | zN |
| ND | `6` | ND |

### DeinterleaveMode

HIVM deinterleave mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| CHANNEL_0 | `0` | CHANNEL_0 |
| CHANNEL_1 | `1` | CHANNEL_1 |
| ALL_CHANNELS | `999` | ALL_CHANNELS |

### DescaleMode

descale mode for matmul

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| DescaleNull | `0` | DescaleNull |
| DescalePerChannel | `1` | DescalePerChannel |
| DescalePerTensor | `2` | DescalePerTensor |

### EVENT

Event ID for Synchronization

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| EVENT_ID0 | `0` | EVENT_ID0 |
| EVENT_ID1 | `1` | EVENT_ID1 |
| EVENT_ID2 | `2` | EVENT_ID2 |
| EVENT_ID3 | `3` | EVENT_ID3 |
| EVENT_ID4 | `4` | EVENT_ID4 |
| EVENT_ID5 | `5` | EVENT_ID5 |
| EVENT_ID6 | `6` | EVENT_ID6 |
| EVENT_ID7 | `7` | EVENT_ID7 |

### FixpipePreQuantMode

HIVM fixpipe pre_quant mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| NO_QUANT | `0` | NO_QUANT |
| S322I8 | `9` | S322I8 |
| F322F16 | `1` | F322F16 |
| F322BF16 | `16` | F322BF16 |

### FixpipePreReluMode

HIVM fixpipe pre_relu mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| NO_RELU | `0` | NO_RELU |
| NORMAL_RELU | `1` | NORMAL_RELU |
| LEAKY_RELU | `2` | LEAKY_RELU |
| P_RELU | `3` | P_RELU |

### IteratorType

HIVM Structured Op Iterator Type

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| kParallel | `0` | parallel |
| kBroadcast | `1` | broadcast |
| kTranspose | `2` | transpose |
| kReduction | `3` | reduction |
| kInterleave | `4` | interleave |
| kDeinterleave | `5` | deinterleave |
| kInverse | `6` | inverse |
| kPad | `7` | pad |
| kConcat | `8` | concat |
| kGather | `9` | gather |
| kCumulative | `10` | cumulative |
| kOpaque | `99` | opaque |

### MatmulBiasMode

bias mode for local matmul op

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| NoBias | `0` | NoBias |
| PerChannelAdd | `1` | PerChannelAdd |
| PerChannelAddWithSplitK | `2` | PerChannelAddWithSplitK |
| ElementwiseCrossLoopAdd | `4` | ElementwiseCrossLoopAdd |
| ElementwiseAdd | `3` | ElementwiseAdd |

### MemPlanMode

Mem Plan Mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| LOCAL_MEM_PLAN | `0` | LOCAL_MEM_PLAN |
| GLOBAL_WORKSPACE_PLAN | `1` | GLOBAL_WORKSPACE_PLAN |

### PadMode

Pad Mode for LoadOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| PadNull | `0` | PadNull |
| PadFirstElem | `1` | PadFirstElem |
| PadValue | `2` | PadValue |

### PIPE

HIVM Op Pipe

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| PIPE_S | `0` | PIPE_S |
| PIPE_V | `1` | PIPE_V |
| PIPE_M | `2` | PIPE_M |
| PIPE_MTE1 | `3` | PIPE_MTE1 |
| PIPE_MTE2 | `4` | PIPE_MTE2 |
| PIPE_MTE3 | `5` | PIPE_MTE3 |
| PIPE_ALL | `6` | PIPE_ALL |
| PIPE_MTE4 | `7` | PIPE_MTE4 |
| PIPE_MTE5 | `8` | PIPE_MTE5 |
| PIPE_V2 | `9` | PIPE_V2 |
| PIPE_FIX | `10` | PIPE_FIX |
| VIRTUAL_PIPE_MTE2_L1A | `11` | VIRTUAL_PIPE_MTE2_L1A |
| VIRTUAL_PIPE_MTE2_L1B | `12` | VIRTUAL_PIPE_MTE2_L1B |
| PIPE_NUM | `13` | PIPE_NUM |
| PIPE_UNASSIGNED | `99` | PIPE_UNASSIGNED |

### ReduceOperation

Reduction kind for VReduceOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| sum | `1` | sum |
| prod | `2` | prod |
| max | `3` | max |
| min | `4` | min |
| max_with_index_left | `5` | max_with_index_left |
| max_with_index_right | `6` | max_with_index_right |
| min_with_index_left | `7` | min_with_index_left |
| min_with_index_right | `8` | min_with_index_right |
| any | `9` | any |
| all | `10` | all |
| xori | `11` | xori |
| ori | `12` | ori |
| andi | `13` | andi |
| none | `0` | none |

### RoundMode

Round Mode for VCastOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| RINT | `0` | rint |
| ROUND | `1` | round |
| FLOOR | `2` | floor |
| CEIL | `3` | ceil |
| TRUNC | `4` | trunc |
| ODD | `5` | odd |
| TRUNCWITHOVERFLOW | `6` | truncwithoverflow |

### SyncBlockInstrMode

HIVM SyncBlockInstrMode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| INTER_BLOCK_SYNCHRONIZATION | `0` | INTER_BLOCK_SYNCHRONIZATION |
| INTER_SUBBLOCK_SYNCHRONIZATION | `1` | INTER_SUBBLOCK_SYNCHRONIZATION |
| INTRA_BLOCK_SYNCHRONIZATION | `2` | INTRA_BLOCK_SYNCHRONIZATION |

### SyncBlockMode

HIVM SyncBlockMode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| ALL_CUBE | `0` | ALL_CUBE |
| ALL_VECTOR | `1` | ALL_VECTOR |
| ALL_SUB_VECTOR | `2` | ALL_SUB_VECTOR |
| BARRIER_CUBE | `3` | BARRIER_CUBE |
| BARRIER_VECTOR | `4` | BARRIER_VECTOR |
| ALL | `5` | ALL |

### TCoreType

HIVM Op Core Type

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| CUBE | `1` | CUBE |
| VECTOR | `2` | VECTOR |
| CUBE_OR_VECTOR | `3` | CUBE_OR_VECTOR |
| CUBE_AND_VECTOR | `4` | CUBE_AND_VECTOR |

### TFuncCoreType

HIVM Function Core Type

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| AIC | `1` | AIC |
| AIV | `2` | AIV |
| MIX | `3` | MIX |
| AIC_OR_AIV | `4` | AIC_OR_AIV |

### TModuleCoreType

HIVM Module Core Type

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| AIC | `1` | AIC |
| AIV | `2` | AIV |
| MIX | `3` | MIX |

### TypeFn

Cast for VCastOp

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| cast_signed | `0` | cast_signed |
| cast_unsigned | `1` | cast_unsigned |
| bitcast | `2` | bitcast |

### UNIT_FLAG

Unit Flag Mode for Synchronization

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| DISABLED | `0` | DISABLED |
| RESERVED | `1` | RESERVED |
| ENABLED_WITHOUT_UPDATE | `2` | ENABLED_WITHOUT_UPDATE |
| ENABLED_WITH_UPDATE | `3` | ENABLED_WITH_UPDATE |
| ENABLED_ONLY_LAST_ITER | `4` | ENABLED_ONLY_LAST_ITER |
| ENABLED_ONLY_FIRST_ITER | `5` | ENABLED_ONLY_FIRST_ITER |
| ENABLED_ONLY_FIRST_AND_LAST_ITERS | `6` | ENABLED_ONLY_FIRST_AND_LAST_ITERS |

### VFMode

HIVM VF Mode

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| SIMD | `0` | SIMD |
| SIMT | `1` | SIMT |
| MIX | `2` | MIX |

### MappingId

Mapping ids for loop mapping

#### Cases

| Symbol | Value | String |
| :----: | :---: | ------ |
| DimX | `0` | x |
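The `MappingId` cases are what `HIVMBlockMappingAttr` and `HIVMSubBlockMappingAttr` are parameterized over. A hypothetical sketch, assuming these mapping attributes attach to `scf.forall` the same way upstream `#gpu.block` mappings do (the attachment site is an assumption, not documented above):

```mlir
// Hypothetical: map a parallel loop via MappingId DimX (string form "x").
scf.forall (%i) in (8) {
  // loop body
} {mapping = [#hivm.sub_block<x>]}
```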