Python module
dtype
Provides data type definitions for tensors in MAX Engine. These data types are essential for defining the precision and memory layout of tensor data when working with machine learning models.
This module defines the DType enum, which represents all supported tensor data types in MAX Engine, including:
- Integer types (signed and unsigned): int8, uint8, int16, uint16, int32, uint32, int64, uint64
- Floating-point types (including float8 variants): float16, bfloat16, float32, float64
- Boolean type: bool
The module also provides utilities for converting between MAX Engine data types and NumPy dtypes, making it easy to interoperate with the NumPy ecosystem.
import numpy as np
from max.dtype import DType

# Convert MAX DType to NumPy dtype
tensor = np.zeros((2, 3), dtype=DType.float32.to_numpy())

# Convert NumPy dtype to MAX DType
array = np.ones((4, 4), dtype=np.float16)
max_dtype = DType.from_numpy(array.dtype)

# Check properties of data types
is_float = DType.float32.is_float()  # True
is_int = DType.int64.is_integral()  # True
size = DType.float64.size_in_bytes  # 8
DType
class max.dtype.DType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
The tensor data type.
align
property align
Returns the alignment requirement of the data type in bytes.
The alignment specifies the memory boundary that values of this data type must be aligned to for optimal performance and correctness.
bfloat16
bfloat16 = 80
bool
bool = 1
float16
float16 = 79
float32
float32 = 81
float4_e2m1fn
float4_e2m1fn = 64
float64
float64 = 82
float8_e4m3fn
float8_e4m3fn = 75
float8_e4m3fnuz
float8_e4m3fnuz = 76
float8_e5m2
float8_e5m2 = 77
float8_e5m2fnuz
float8_e5m2fnuz = 78
float8_e8m0fnu
float8_e8m0fnu = 73
from_numpy()
from_numpy(dtype)
Converts a NumPy dtype to the corresponding DType.
Parameters:
dtype (np.dtype) – The NumPy dtype to convert.
Returns:
The corresponding DType enum value.
Return type:
DType
Raises:
ValueError – If the input dtype is not supported.
from_torch()
from_torch(_error=None)
int16
int16 = 137
int32
int32 = 139
int64
int64 = 141
int8
int8 = 135
is_float()
is_float(self) → bool
Checks if the data type is a floating-point type.
is_float8()
is_float8(self) → bool
Checks if the data type is an 8-bit floating-point type.
is_half()
is_half(self) → bool
Checks if the data type is a half-precision floating-point type.
is_integral()
is_integral(self) → bool
Checks if the data type is an integer type.
is_signed_integral()
is_signed_integral(self) → bool
Checks if the data type is a signed integer type.
is_unsigned_integral()
is_unsigned_integral(self) → bool
Checks if the data type is an unsigned integer type.
size_in_bits
property size_in_bits
Returns the size of the data type in bits.
This indicates how many bits are required to store a single value of this data type in memory.
size_in_bytes
property size_in_bytes
Returns the size of the data type in bytes.
This indicates how many bytes are required to store a single value of this data type in memory.
to_numpy()
to_numpy()
Converts this DType to the corresponding NumPy dtype.
Parameters:
self (DType)
Returns:
The corresponding NumPy dtype object.
Return type:
np.dtype
Raises:
ValueError – If the dtype is not supported.
to_torch()
to_torch(_error=None)
uint16
uint16 = 136
uint32
uint32 = 138
uint64
uint64 = 140
uint8
uint8 = 134
finfo
class max.dtype.finfo(dtype)
Numerical properties of a floating point max.dtype.DType.
This is modeled after torch.finfo, providing bits, eps, max, min, tiny, smallest_normal, and dtype attributes for every MAX float dtype, including the bfloat16, float8, and float4 types that NumPy cannot represent.
Parameters:
dtype (DType) – A floating-point DType to query.
Raises:
TypeError – If dtype is not a floating-point type.
bits
bits: int
dtype
dtype: DType
eps
eps: float
max
max: float
min
min: float
smallest_normal
property smallest_normal: float
Alias for tiny (torch.finfo compatibility).
tiny
tiny: float