
【PaddlePaddle Hackathon】31. Add a new frontend language for Paddle Inference #37160

Closed
wants to merge 6 commits

Conversation

chenyanlann
Contributor

PR types

New features

PR changes

Add Java APIs for Paddle Inference

Describe

Task: #35977

Paddle Inference Java API

The Paddle Inference Java API is implemented on top of the C API and JNI, so you need to prepare the C inference library in advance.

Installation

1. Download the C inference library

You can either download the prebuilt paddle_inference_c library directly, or build it from source; for a source build, follow the official documentation. Note that -DON_INFER=ON must be enabled when running cmake; the build then produces paddle_inference_c_install_dir in the build directory.
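As a rough sketch of the source-build path (only -DON_INFER=ON comes from this PR; the repository URL, directory names, and other flags are illustrative assumptions — follow the official build docs for the authoritative steps on your platform):

```shell
# Illustrative source build of the C inference library (assumptions noted above).
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle && mkdir build && cd build

# -DON_INFER=ON is required so the C inference library gets built.
cmake .. -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"

# The C library is collected under paddle_inference_c_install_dir in the build dir.
ls paddle_inference_c_install_dir
```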

2. Prepare the inference model

Download the resnet50 model and extract it to obtain a model in Paddle Combined format.

wget https://paddle-inference-dist.bj.bcebos.com/Paddle-Inference-Demo/resnet50.tgz
tar zxf resnet50.tgz

The extracted resnet50 directory is structured as follows:

resnet50/
├── inference.pdmodel
├── inference.pdiparams
└── inference.pdiparams.info
3. Prepare the execution directory

git clone github.com/paddlepaddle/paddle/paddle/fluid/inference/javaapi

4. Build the shared library and the jar package

Run the following in the javaapi directory:

./build_gpu.sh {C library dir} {JNI header dir} {JNI system header dir}

For example, with the author's directory layout:
./build_gpu.sh /root/paddle_c/paddle_inference_c /usr/lib/jvm/java-8-openjdk-amd64/include /usr/lib/jvm/java-8-openjdk-amd64/include/linux

When the build completes, JavaInference.jar and libpaddle_inference.so are generated in the current directory.

5. Run the unit tests to verify

Run the following in the javaapi directory:

./test.sh {C library dir} {.pdmodel file path} {.pdiparams file path}

For example, with the author's directory layout:
./test.sh "/root/paddle_c/paddle_inference_c_2.2/paddle_inference_c"  "/root/paddle_c/resnet50/inference.pdmodel" "/root/paddle_c/resnet50/inference.pdiparams"

Using Paddle Inference from Java

First, create the inference config:

Config config = new Config();
config.setCppModel(model_file, params_file);

Create the predictor:

Predictor predictor = Predictor.createPaddlePredictor(config);

Get the input Tensor:

String inNames = predictor.getInputNameById(0);
Tensor inHandle = predictor.getInputHandle(inNames);

Set the input data (assuming a single input):

inHandle.Reshape(4, new int[]{1, 3, 224, 224});
float[] inData = new float[1*3*224*224];
inHandle.CopyFromCpu(inData);

Run inference:

predictor.Run();

Get the output Tensor:

String outNames = predictor.getOutputNameById(0);
Tensor outHandle = predictor.getOutputHandle(outNames);
float[] outData = new float[outHandle.GetSize()];
outHandle.CopyToCpu(outData);
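Putting the steps above together, a minimal end-to-end sketch might look like the following. The Config, Predictor, and Tensor names and method signatures are taken from the snippets above; the class name, argument handling, and file paths are illustrative assumptions, and running it requires the JavaInference.jar and libpaddle_inference.so produced in step 4:

```java
// End-to-end sketch of the Java API described in this PR.
// Assumes JavaInference.jar is on the classpath and
// libpaddle_inference.so is loadable.
public class ResNet50Demo {
    public static void main(String[] args) {
        String modelFile = args[0];   // e.g. resnet50/inference.pdmodel
        String paramsFile = args[1];  // e.g. resnet50/inference.pdiparams

        // Create the inference config and point it at the Combined model.
        Config config = new Config();
        config.setCppModel(modelFile, paramsFile);

        // Build a predictor from the config.
        Predictor predictor = Predictor.createPaddlePredictor(config);

        // Fetch the (single) input tensor and feed zero-filled data
        // shaped for a 1x3x224x224 image batch.
        String inName = predictor.getInputNameById(0);
        Tensor inHandle = predictor.getInputHandle(inName);
        inHandle.Reshape(4, new int[]{1, 3, 224, 224});
        float[] inData = new float[1 * 3 * 224 * 224];
        inHandle.CopyFromCpu(inData);

        // Run inference and copy the output back to the JVM heap.
        predictor.Run();
        String outName = predictor.getOutputNameById(0);
        Tensor outHandle = predictor.getOutputHandle(outName);
        float[] outData = new float[outHandle.GetSize()];
        outHandle.CopyToCpu(outData);
        System.out.println("output size: " + outData.length);
    }
}
```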

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ chenyanlann
❌ chenyanlan


chenyanlan does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
