Qualcomm AI Engine Direct - Support Debug Handle and Integrate IntermediateOutputCapturer #16316
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16316
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit 00c4f7e with merge base 3233761.
NEW FAILURE - The following job has failed:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Is the PR ready to be reviewed now?
For any passes executed during qnn_preprocess, users will need to handle the debug_handle ID themselves.
Description: During pass transformations, some passes might copy a node's meta when creating a new node, which means multiple nodes might end up sharing the same debug_handle ID when they shouldn't.
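To make the duplication concrete, here is a minimal torch.fx sketch (illustrative only, not code from this PR) of how copying a node's meta carries its debug_handle onto a new node:

```python
# Illustrative sketch: a pass that copies a node's meta wholesale also copies
# its debug_handle, so two distinct nodes end up reporting the same handle.
import operator
import torch
from torch import fx

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

gm = fx.symbolic_trace(AddOne())

# Pretend the original add node already carries a debug handle.
add_node = next(n for n in gm.graph.nodes if n.op == "call_function")
add_node.meta["debug_handle"] = 7

# A pass inserts a new node and copies the old node's meta onto it.
with gm.graph.inserting_after(add_node):
    new_node = gm.graph.call_function(operator.mul, args=(add_node, 2))
new_node.meta = dict(add_node.meta)

# Both nodes now share debug_handle 7, breaking the 1-to-1 output mapping.
print(add_node.meta["debug_handle"], new_node.meta["debug_handle"])  # 7 7
```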
I don't fully understand this part: if several nodes come from one ancestor node (e.g. from decomposing some op), they should have the same debug handle for tracing.
I think the idea is that if we decompose the node but never assign a new handle ID, we are only saving the information for the last decomposed node rather than all decomposed nodes. I have drawn an example below. Since edge and QNN have a 1-to-1 mapping in this case, I think it would be better to gather all possible information rather than only the last node's debug info. Since we reassign the debug handle, instead of only getting the output of node2, we can also get the info for node1.

Here I'm a little confused: when we look at the QNN graph, how can we know that qnn_node_1 and qnn_node_2 come from the same super node? Or another question might be: which graph will serve as the ground-truth graph when you do the intermediate comparison?
> gather all possible information rather than the last node's debug info.
We won't gather only the last node's debug info, but all of it.
In ExecuTorch we normally follow this rule:
If we transform {old_node_1, old_node_2, ..., old_node_n} into {new_node_1, new_node_2, ..., new_node_m}, where n and m can be arbitrary numbers starting from 1, then every new_node should have the same debug handle, and that debug handle will be set(old_node_1.debug_handle, old_node_2.debug_handle, ..., old_node_n.debug_handle).
You can see that if n is 1, this transform is an operator decomposition; if m is 1, it is an operator fusion, etc.
This way, whenever we see an arbitrary new_node, we know its ancestors.
Not sure if that makes sense to you?
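A hedged sketch of that rule (the helper name is mine, not an ExecuTorch API): every node produced by a transform receives the union of the debug handles of all the nodes it replaced.

```python
# Sketch only: propagate debug handles through an n-to-m transform by taking
# the union of all ancestor handles, so any new node can be traced back.
from typing import Iterable, Sequence

def propagate_debug_handles(old_nodes: Sequence, new_nodes: Iterable) -> None:
    merged = set()
    for old in old_nodes:
        handle = old.meta.get("debug_handle")
        if handle is None:
            continue
        # A handle may already be a tuple/set from an earlier transform.
        if isinstance(handle, (set, frozenset, tuple, list)):
            merged.update(handle)
        else:
            merged.add(handle)
    shared = tuple(sorted(merged))
    for new in new_nodes:
        new.meta["debug_handle"] = shared
```

With n = 1 this covers decomposition (every decomposed node inherits the single ancestor's handle); with m = 1 it covers fusion (the fused node carries the handles of all its ancestors).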
Force-pushed from 9a7ca59 to dc72614.
Hi @Gasoonjia,
Hi @cccclai, @Gasoonjia, @kimishpatel, I would also like to get some suggestions on the official API for retrieving an edge IR. The current way of retrieving an edge IR is through: executorch/examples/qualcomm/utils.py Line 499 in 0fb422f
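For reference, the standard export-to-edge flow looks roughly like this (a sketch with a toy model, not the exact code at the line referenced above):

```python
# Rough sketch of retrieving an edge-dialect program; model and inputs are toy
# placeholders, not from examples/qualcomm/utils.py.
import torch
from executorch.exir import to_edge

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

exported = torch.export.export(Model(), (torch.randn(2, 3),))
edge_manager = to_edge(exported)
edge_ir = edge_manager.exported_program()  # edge-dialect ExportedProgram
print(edge_ir.graph_module.graph)
```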
However, I encountered the following issues when retrieving the edge IR using the above method.
Thanks
I think instead of using the edge graph IR as the ground truth for comparison, it would be great if we could use the exported program the ET stack gets in the first place (e.g. the export graph of executorch/examples/qualcomm/utils.py Line 480 in 0fb422f).
You can see how we calculate intermediate output numerical discrepancy here: executorch/devtools/inspector/_inspector.py Line 1407 in 0fb422f
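For context, the kind of per-node discrepancy metric involved looks roughly like this (a generic sketch, not the _inspector.py code linked above):

```python
# Generic sketch: compare a runtime intermediate output against its
# ahead-of-time reference using signal-to-noise ratio in dB.
import torch

def snr_db(reference: torch.Tensor, actual: torch.Tensor) -> float:
    noise = (reference - actual).pow(2).mean().clamp_min(1e-12)
    signal = reference.pow(2).mean().clamp_min(1e-12)
    return (10 * torch.log10(signal / noise)).item()
```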
Here's the pass for debug handle generation: https://github.com/pytorch/executorch/blob/0fb422f9c59e0e5526c0082352a583baf0510fb7/exir/passes/debug_handle_generator_pass.py. The debug handle of a node is the same as that of the nodes sharing the same greatest ancestor node in the export flow.
Here's an example of how our current API works on the ViT model on the XNNPACK backend: https://gist.github.com/Gasoonjia/db6285ac39ad5759b95c7a92d37cd4f8 and below is the expected output. For some ops like layernorm there are still some issues I need to fix.
I would love to chat with you about how we can make the pipeline work on the Qualcomm backend!
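For anyone following along, the entry point of that pipeline is the devtools Inspector, roughly used like this (file paths are placeholders):

```python
# Sketch of the devtools Inspector flow: it joins runtime ETDump events with
# the ETRecord so intermediate outputs can be mapped back via debug handles.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="etdump.etdp",   # runtime events / intermediate outputs (placeholder path)
    etrecord="etrecord.bin",     # AOT graphs and debug handle map (placeholder path)
)
inspector.print_data_tabular()   # per-event table of the collected data
```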
Force-pushed from dc72614 to 00c4f7e.
Hi @Gasoonjia,
Summary
Additional Topics:
Test plan
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleUtilsScript.test_intermediate_debugger -s $DEVICE --model SM8650 --build_folder build-android/ --executorch_root . --image_dataset ../imagenet-mini/val/ --artifact ./e2e_test_debug
python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_simple_model --model SM8550 --device $DEVICE --build_folder build-android
python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_topk --model SM8550 --device $DEVICE --build_folder build-android