# HCgemvBatched

sip is an efficient, reliable, high-performance signal-processing operator acceleration library provided by CANN, built on Huawei Ascend AI processors and designed specifically for the signal-processing domain. Project page: https://gitcode.com/cann/sip

## Product Support

| Product | Supported |
| --- | --- |
| Atlas 200I/500 A2 inference products | × |
| Atlas inference series products | × |
| Atlas training series products | × |
| Atlas A3 training series products / Atlas A3 inference series products | √ |
| Atlas A2 training series products / Atlas A2 inference series products | √ |
| Ascend 950PR/Ascend 950DT | × |

## Function Description

| Interface | Function |
| --- | --- |
| asdBlasMakeHCgemvBatchedPlan | Initializes the operator configuration bound to the given handle. |
| asdBlasHCgemvBatched | Computes batched complex matrix-vector products. |

Computation formula:

$$ y = \alpha \cdot \mathrm{op}(A) \cdot x + \beta \cdot y $$

where op denotes the operation applied to matrix A: either "conjugate transpose" or "no transpose".

### Worked example

- Input `inTensorA[i]` is [[1+1i, 1+2i], [1+3i, 1+4i]].
- Input `x[i]` is [1+1i, 1+1i].
- Input `trans` is N, meaning A is not transposed.
- Input `m` is 2, `n` is 2.
- Input `alpha` is 1+0i, `beta` is 0+0i.
- Input `lda` is 2.
- Input `batchCount` is 1.

After calling the `asdBlasHCgemvBatched` operator, output `y[i]` is [0+4i, 0+8i].
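To make the semantics concrete, the sketch below mirrors the formula on the host for a single batch entry. It is an illustration only: the function name `hcgemvRef` is hypothetical, and `std::complex<float>` stands in for the library's half-precision complex element type (`std::complex<op::fp16_t>`, i.e. COMPLEX32 on the device).

```cpp
#include <complex>
#include <cstdint>
#include <vector>

// Host-side reference for one batch entry: y = alpha * op(A) * x + beta * y.
// A is row-major with shape [m, n]; conjTrans selects op(A) = conj(A)^T.
// std::complex<float> is a stand-in for the device-side COMPLEX32 type.
void hcgemvRef(bool conjTrans, int64_t m, int64_t n,
               std::complex<float> alpha,
               const std::vector<std::complex<float>> &A,  // m * n elements
               const std::vector<std::complex<float>> &x,
               std::complex<float> beta,
               std::vector<std::complex<float>> &y)
{
    const int64_t rows = conjTrans ? n : m;  // length of y
    const int64_t cols = conjTrans ? m : n;  // length of x
    for (int64_t i = 0; i < rows; ++i) {
        std::complex<float> acc(0.0f, 0.0f);
        for (int64_t j = 0; j < cols; ++j) {
            // op(A)[i][j] is A[i][j] for no transpose, conj(A[j][i]) otherwise.
            const std::complex<float> a =
                conjTrans ? std::conj(A[j * n + i]) : A[i * n + j];
            acc += a * x[j];
        }
        y[i] = alpha * acc + beta * y[i];
    }
}
```

Applied per batch entry b (offset b\*m\*n into A, and b\*n / b\*m into the vectors in the no-transpose case), this reproduces what the operator computes under the current constraints alpha = 1+0i and beta = 0+0i.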
## Function Prototypes

```cpp
AspbStatus asdBlasMakeHCgemvBatchedPlan(
    asdBlasHandle handle,
    asdBlasOperation_t trans,
    const int64_t m);

AspbStatus asdBlasHCgemvBatched(
    asdBlasHandle handle,
    asdBlasOperation_t trans,
    const int64_t m,
    const int64_t n,
    const std::complex<op::fp16_t> alpha,
    aclTensor *A,
    const int64_t lda,
    aclTensor *x,
    const int64_t incx,
    const std::complex<op::fp16_t> beta,
    aclTensor *y,
    const int64_t incy,
    const int64_t batchCount);
```

## asdBlasMakeHCgemvBatchedPlan Parameter Description

| Parameter | Type | In/Out | Description |
| --- | --- | --- | --- |
| handle | asdBlasHandle | Input | Operator handle. |
| trans | asdBlasOperation_t | Input | Whether matrix A is transposed: ASDBLAS_OP_N (no transpose), ASDBLAS_OP_T (transpose), ASDBLAS_OP_C (conjugate transpose). |
| m | int64_t | Input | Number of rows of matrix A in a single batch. |

### Return Value

Returns a status code. For details, see the SiP return codes.

## asdBlasHCgemvBatched Parameter Description

| Parameter | Type | In/Out | Description |
| --- | --- | --- | --- |
| handle | asdBlasHandle | Input | Operator handle. |
| trans | asdBlasOperation_t | Input | Whether matrix A is transposed: ASDBLAS_OP_N (no transpose), ASDBLAS_OP_T (transpose), ASDBLAS_OP_C (conjugate transpose). |
| m | int64_t | Input | Number of rows of matrix A in a single batch. |
| n | int64_t | Input | Number of columns of matrix A in a single batch. |
| alpha | std::complex<op::fp16_t> | Input | The scalar alpha in the formula; a complex scalar that multiplies the matrix-vector product. In the current version alpha may only take the value 1+0i. |
| A | aclTensor * | Input | Input matrix, corresponding to A in the formula. Row-major. Supported data type: COMPLEX32. Supported data format: ND. Shape: [batchCount, m, n]. |
| lda | int64_t | Input | Memory-address offset between horizontally adjacent elements of A; currently constrained to m. |
| x | aclTensor * | Input | Input vector, corresponding to x in the formula. Supported data type: COMPLEX32. Supported data format: ND. Shape is [batchCount, n] when trans is "no transpose", and [batchCount, m] when trans is "conjugate transpose". |
| incx | int64_t | Input | Stride of vector x; currently constrained to 1. |
| beta | std::complex<op::fp16_t> | Input | The scalar beta in the formula; a complex scalar that multiplies vector y. In the current version beta may only take the value 0+0i. |
| y | aclTensor * | Input/Output | Corresponds to y in the formula. Supported data type: COMPLEX32. Supported data format: ND. Shape is [batchCount, m] when trans is "no transpose", and [batchCount, n] when trans is "conjugate transpose". |
| incy | int64_t | Input | Stride of vector y; currently constrained to 1. |
| batchCount | int64_t | Input | Number of batches. Value range: {1, 2, ..., 314496}. |

### Return Value

Returns a status code. For details, see the SiP return codes.

## Constraints

None.

## Invocation Example

The sample code below is a minimal implementation intended for quickly getting started with, developing, and debugging the operator. Its core goal is to demonstrate the operator's essential functionality with the most compact code possible, not to provide production-grade safety guarantees. Users are advised not to use the sample code directly as production code; if the sample code is applied to a real business scenario and a security issue occurs, the user bears the responsibility.

```cpp
#include <iostream>
#include <vector>
#include <complex>
#include "asdsip.h"
#include "acl/acl.h"
#include "acl_meta.h"

using namespace AsdSip;

#define ASD_STATUS_CHECK(err)                                  \
    do {                                                       \
        AsdSip::AspbStatus err_ = (err);                       \
        if (err_ != AsdSip::ErrorType::ACL_SUCCESS) {          \
            std::cout << "Execute failed." << std::endl;       \
            exit(-1);                                          \
        } else {                                               \
            std::cout << "Execute successfully." << std::endl; \
        }                                                      \
    } while (0)

void printTensor(const std::complex<op::fp16_t> *tensorData, int64_t batch, int64_t rows, int64_t cols)
{
    for (int64_t b = 0; b < batch; b++) {
        for (int64_t i = 0; i < rows; i++) {
            for (int64_t j = 0; j < cols; j++) {
                auto data = tensorData[b * rows * cols + i * cols + j];
                std::cout << "(" << (float)data.real() << ", " << (float)data.imag() << ") ";
            }
            std::cout << std::endl;
        }
        std::cout << std::endl;
    }
}

#define CHECK_RET(cond, return_expr) \
    do {                             \
        if (!(cond)) {               \
            return_expr;             \
        }                            \
    } while (0)

#define LOG_PRINT(message, ...)         \
    do {                                \
        printf(message, ##__VA_ARGS__); \
    } while (0)

int64_t GetShapeSize(const std::vector<int64_t> &shape)
{
    int64_t shapeSize = 1;
    for (auto i : shape) {
        shapeSize *= i;
    }
    return shapeSize;
}

int Init(int32_t deviceId, aclrtStream *stream)
{
    // Standard ACL initialization.
    auto ret = aclInit(nullptr);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("aclInit failed. ERROR: %d\n", ret); return ret);
    ret = aclrtSetDevice(deviceId);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("aclrtSetDevice failed. ERROR: %d\n", ret); return ret);
    ret = aclrtCreateStream(stream);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("aclrtCreateStream failed. ERROR: %d\n", ret); return ret);
    return 0;
}

template <typename T>
int CreateAclTensor(const std::vector<T> &hostData, const std::vector<int64_t> &shape, void **deviceAddr,
                    aclDataType dataType, aclTensor **tensor)
{
    auto size = GetShapeSize(shape) * sizeof(T);
    // Call aclrtMalloc to allocate device-side memory.
    auto ret = aclrtMalloc(deviceAddr, size, ACL_MEM_MALLOC_HUGE_FIRST);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("aclrtMalloc failed. ERROR: %d\n", ret); return ret);
    // Call aclrtMemcpy to copy host-side data into the device-side memory.
    ret = aclrtMemcpy(*deviceAddr, size, hostData.data(), size, ACL_MEMCPY_HOST_TO_DEVICE);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("aclrtMemcpy failed. ERROR: %d\n", ret); return ret);
    // Compute the strides of a contiguous tensor.
    std::vector<int64_t> strides(shape.size(), 1);
    for (int64_t i = shape.size() - 2; i >= 0; i--) {
        strides[i] = shape[i + 1] * strides[i + 1];
    }
    // Call aclCreateTensor to create the aclTensor.
    *tensor = aclCreateTensor(shape.data(), shape.size(), dataType, strides.data(), 0,
                              aclFormat::ACL_FORMAT_ND, shape.data(), shape.size(), *deviceAddr);
    return 0;
}

int main(int argc, char **argv)
{
    int deviceId = 0;
    aclrtStream stream;
    auto ret = Init(deviceId, &stream);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("Init acl failed. ERROR: %d\n", ret); return ret);

    int64_t batch = 3;
    int64_t m = 3;
    int64_t n = 3;
    int64_t lda = m;
    int incx = 1;
    int incy = 1;
    std::complex<op::fp16_t> alpha = std::complex<op::fp16_t>(1.0, 0.0);
    std::complex<op::fp16_t> beta = std::complex<op::fp16_t>(0.0, 0.0);
    asdBlasOperation_t trans = asdBlasOperation_t::ASDBLAS_OP_N;

    int64_t aSize = batch * m * n;
    int64_t xSize = batch * n;
    int64_t ySize = batch * m;

    // Use resize (not reserve) so the indexed writes below stay inside the vectors.
    std::vector<std::complex<op::fp16_t>> tensorInAData;
    tensorInAData.resize(aSize);
    for (int64_t b = 0; b < batch; b++) {
        for (int64_t i = 0; i < m; i++) {
            for (int64_t j = 0; j < n; j++) {
                tensorInAData[b * m * n + i * n + j] = std::complex<op::fp16_t>(i + 0.0f, i + 0.0f);
            }
        }
    }
    std::vector<std::complex<op::fp16_t>> tensorInXData;
    tensorInXData.resize(xSize);
    for (int64_t b = 0; b < batch; b++) {
        for (int64_t i = 0; i < n; i++) {
            tensorInXData[b * n + i] = std::complex<op::fp16_t>(i + 1.0f, 2.0f);
        }
    }
    std::vector<std::complex<op::fp16_t>> tensorInYData;
    tensorInYData.resize(ySize);
    for (int64_t b = 0; b < batch; b++) {
        for (int64_t i = 0; i < m; i++) {
            tensorInYData[b * m + i] = std::complex<op::fp16_t>(1.0f, 1.0f);
        }
    }

    std::cout << "trans " << static_cast<int32_t>(trans) << std::endl;
    std::cout << "alpha (" << (float)alpha.real() << ", " << (float)alpha.imag() << ")" << std::endl;
    std::cout << "beta (" << (float)beta.real() << ", " << (float)beta.imag() << ")" << std::endl;
    std::cout << "------- input TensorInA -------" << std::endl;
    printTensor(tensorInAData.data(), batch, m, n);
    std::cout << "------- input TensorInX -------" << std::endl;
    printTensor(tensorInXData.data(), batch, 1, n);
    std::cout << "------- input TensorInY -------" << std::endl;
    printTensor(tensorInYData.data(), batch, 1, m);

    std::vector<int64_t> aShape = {batch, m, n};
    std::vector<int64_t> xShape = {batch, n};
    std::vector<int64_t> yShape = {batch, m};
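    // Shape rule from the parameter table (illustration only; this sample fixes
    // trans to ASDBLAS_OP_N above): with ASDBLAS_OP_N, x is [batch, n] and y is
    // [batch, m]; with ASDBLAS_OP_C the two vector shapes swap, e.g.
    //   std::vector<int64_t> xShape = {batch, trans == ASDBLAS_OP_C ? m : n};
    //   std::vector<int64_t> yShape = {batch, trans == ASDBLAS_OP_C ? n : m};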
    aclTensor *inputA = nullptr;
    aclTensor *inputX = nullptr;
    aclTensor *inputY = nullptr;
    void *inputADeviceAddr = nullptr;
    void *inputXDeviceAddr = nullptr;
    void *inputYDeviceAddr = nullptr;
    ret = CreateAclTensor(tensorInAData, aShape, &inputADeviceAddr, aclDataType::ACL_COMPLEX32, &inputA);
    CHECK_RET(ret == ::ACL_SUCCESS, return ret);
    ret = CreateAclTensor(tensorInXData, xShape, &inputXDeviceAddr, aclDataType::ACL_COMPLEX32, &inputX);
    CHECK_RET(ret == ::ACL_SUCCESS, return ret);
    ret = CreateAclTensor(tensorInYData, yShape, &inputYDeviceAddr, aclDataType::ACL_COMPLEX32, &inputY);
    CHECK_RET(ret == ::ACL_SUCCESS, return ret);

    asdBlasHandle handle;
    asdBlasCreate(&handle);
    size_t lwork = 0;
    void *buffer = nullptr;
    // Create the plan for the batched HCgemv operator.
    asdBlasMakeHCgemvBatchedPlan(handle, trans, m);
    // Query the workspace size required by the plan and allocate it if needed.
    asdBlasGetWorkspaceSize(handle, &lwork);
    std::cout << "lwork " << lwork << std::endl;
    if (lwork > 0) {
        ret = aclrtMalloc(&buffer, static_cast<int64_t>(lwork), ACL_MEM_MALLOC_HUGE_FIRST);
        CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("allocate workspace failed. ERROR: %d\n", ret); return ret);
    }
    asdBlasSetWorkspace(handle, buffer);
    asdBlasSetStream(handle, stream);
    // Launch the batched complex matrix-vector multiply.
    ASD_STATUS_CHECK(asdBlasHCgemvBatched(handle, trans, m, n, alpha, inputA, lda, inputX, incx,
                                          beta, inputY, incy, batch));
    asdBlasSynchronize(handle);
    asdBlasDestroy(handle);

    // Copy the result back to the host.
    ret = aclrtMemcpy(tensorInYData.data(), ySize * sizeof(std::complex<op::fp16_t>), inputYDeviceAddr,
                      ySize * sizeof(std::complex<op::fp16_t>), ACL_MEMCPY_DEVICE_TO_HOST);
    CHECK_RET(ret == ::ACL_SUCCESS, LOG_PRINT("copy y from device to host failed. ERROR: %d\n", ret); return ret);
    std::cout << "------- output TensorInY -------" << std::endl;
    printTensor(tensorInYData.data(), batch, 1, m);

    // Release resources.
    aclDestroyTensor(inputX);
    aclDestroyTensor(inputY);
    aclDestroyTensor(inputA);
    aclrtFree(inputXDeviceAddr);
    aclrtFree(inputYDeviceAddr);
    aclrtFree(inputADeviceAddr);
    aclrtDestroyStream(stream);
    aclrtResetDevice(deviceId);
    aclFinalize();
    return 0;
}
```
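The sample releases the handle manually with asdBlasDestroy. As a minimal sketch only (assuming just the asdBlasCreate/asdBlasDestroy pair used above; this guard is not part of the SiP API), a scope guard can tie the handle's lifetime to a scope so that cleanup also runs on early error returns:

```cpp
// Sketch of a scope guard around the handle lifecycle shown in the sample.
// Assumes the asdBlasCreate(&handle) / asdBlasDestroy(handle) calls used above.
struct AsdBlasHandleGuard {
    asdBlasHandle handle{};
    AsdBlasHandleGuard() { asdBlasCreate(&handle); }
    ~AsdBlasHandleGuard() { asdBlasDestroy(handle); }
    AsdBlasHandleGuard(const AsdBlasHandleGuard &) = delete;
    AsdBlasHandleGuard &operator=(const AsdBlasHandleGuard &) = delete;
};
```

With such a guard, the plan creation and execution calls would take guard.handle, and asdBlasSynchronize would still be called before the result is copied back.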